[jira] [Resolved] (KAFKA-9372) Add producer config to make topicExpiry configurable

2020-02-26 Thread Jiao Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiao Zhang resolved KAFKA-9372.
---
Resolution: Duplicate

Let me close this, as the issue is covered by
https://issues.apache.org/jira/browse/KAFKA-8904

> Add producer config to make topicExpiry configurable
> 
>
> Key: KAFKA-9372
> URL: https://issues.apache.org/jira/browse/KAFKA-9372
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 1.1.0
>Reporter: Jiao Zhang
>Assignee: Brian Byrne
>Priority: Minor
>
> Sometimes we get the error "org.apache.kafka.common.errors.TimeoutException: 
> Failed to update metadata after 1000 ms" on the producer side. We 
> investigated and found:
>  # our producer produces messages at a really low rate; the interval between 
> sends is more than 10 minutes
>  # by default, the producer expires topics after TOPIC_EXPIRY_MS; once a 
> topic has expired, if no data is produced before the next metadata update 
> (automatically triggered by metadata.max.age.ms), the partitions entry for 
> the topic disappears from the metadata cache. As a result, almost every 
> produce first needs to fetch metadata, which can end with a timeout.
> To solve this, we propose adding a new producer config, metadata.topic.expiry, 
> to make topicExpiry configurable. Topic expiry is useful only when the 
> producer is long-lived and produces to a variable set of topics. When 
> producers are bound to a single topic or a few fixed topics, there is no need 
> to expire topics at all.
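The trade-off above can be sketched as a producer configuration. This is a minimal, hypothetical sketch: metadata.topic.expiry is only the key proposed in this ticket (it does not exist in released clients; KAFKA-8904 later addressed the same problem), while metadata.max.age.ms is a real producer setting.

```java
import java.util.Properties;

public class ProducerMetadataConfigSketch {
    // Build producer properties for a low-rate producer bound to fixed topics.
    // metadata.topic.expiry is the hypothetical key proposed in this ticket.
    public static Properties producerProps(long topicExpiryMs) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Real setting: metadata is proactively refreshed this often (default
        // 5 minutes), which only helps while the topic is still in the cache.
        props.put("metadata.max.age.ms", "300000");
        // Proposed setting: effectively disable topic expiry so the partitions
        // entry for the topic never drops out of the cache between rare sends.
        props.put("metadata.topic.expiry", String.valueOf(topicExpiryMs));
        return props;
    }

    public static void main(String[] args) {
        System.out.println(
            producerProps(Long.MAX_VALUE).getProperty("metadata.topic.expiry"));
    }
}
```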



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9616) Add new metrics to get total response time with throttle time subtracted

2020-02-26 Thread Jiao Zhang (Jira)
Jiao Zhang created KAFKA-9616:
-

 Summary: Add new metrics to get total response time with throttle 
time subtracted
 Key: KAFKA-9616
 URL: https://issues.apache.org/jira/browse/KAFKA-9616
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 1.1.0
Reporter: Jiao Zhang


We are using these RequestMetrics for our cluster monitoring 
[https://github.com/apache/kafka/blob/fb5bd9eb7cdfdae8ed1ea8f68e9be5687f610b28/core/src/main/scala/kafka/network/RequestChannel.scala#L364]

and configure our AlertManager to fire alerts when the 99th percentile of 
'TotalTimeMs' exceeds a threshold. This alert is very important, as it notifies 
cluster administrators of bad situations, for example when one server drops 
out of the cluster or loses leadership.

But we sometimes suffer from false alerts. This is the case: we set quotas 
like 'producer_byte_rate' for some clients, so when requests from those 
clients are throttled, 'ThrottleTimeMs' is long, and due to the throttling 
'TotalTimeMs' sometimes exceeds the threshold and an alert fires. As a result 
we also have to spend time checking the details of these false alerts.

So this ticket proposes adding a new metric, 'ProcessTimeMs', whose value is 
the total response time with the throttle time subtracted. This metric is more 
accurate and would let us notice only the genuinely unexpected situations.

By the way, we tried to achieve this with PromQL against the existing metrics, 
i.e. Total - Throttle, but it does not work: the two metrics appear to be 
inconsistent in time. So it is better to expose a new metric from the broker 
side.
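The proposed metric is simply the throttle time subtracted from the total response time, floored at zero so that timing skew between the two measurements cannot yield a negative value. A minimal sketch (the class and method names here are hypothetical, not the actual broker implementation):

```java
public class ProcessTimeSketch {
    // Total response time with throttle time removed; clamp at zero in case
    // the two samples were taken at slightly different times.
    public static long processTimeMs(long totalTimeMs, long throttleTimeMs) {
        return Math.max(0L, totalTimeMs - throttleTimeMs);
    }

    public static void main(String[] args) {
        // A throttled request: 1200 ms total, of which 900 ms was throttling.
        System.out.println(processTimeMs(1200, 900)); // prints 300
    }
}
```

Alerting on this value instead of 'TotalTimeMs' would keep quota-induced throttling from triggering the 99th-percentile alert.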



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [ANNOUNCE] New committer: Konstantine Karantasis

2020-02-26 Thread Satish Duggana
Congrats Konstantine!!

On Thu, Feb 27, 2020 at 12:35 PM Tom Bentley  wrote:
>
> Congratulations!
>
> On Thu, Feb 27, 2020 at 6:43 AM David Jacot  wrote:
>
> > Congrats!
> >
> > Le jeu. 27 févr. 2020 à 06:58, Vahid Hashemian 
> > a écrit :
> >
> > > Congratulations Konstantine!
> > >
> > > Regards,
> > > --Vahid
> > >
> > > On Wed, Feb 26, 2020 at 6:49 PM Boyang Chen 
> > > wrote:
> > >
> > > > Congrats Konstantine!
> > > >
> > > > On Wed, Feb 26, 2020 at 6:32 PM Manikumar 
> > > > wrote:
> > > >
> > > > > Congrats Konstantine!
> > > >
> > > >
> > > > > On Thu, Feb 27, 2020 at 7:46 AM Matthias J. Sax 
> > > > wrote:
> > > > >
> > > > > > -BEGIN PGP SIGNED MESSAGE-
> > > > > > Hash: SHA512
> > > > > >
> > > > > > Congrats!
> > > > > >
> > > > > > On 2/27/20 2:21 AM, Jeremy Custenborder wrote:
> > > > > > > Congrats Konstantine!
> > > > > > >
> > > > > > > On Wed, Feb 26, 2020 at 2:39 PM Bill Bejeck 
> > > > > > > wrote:
> > > > > > >>
> > > > > > >> Congratulations Konstantine! Well deserved.
> > > > > > >>
> > > > > > >> -Bill
> > > > > > >>
> > > > > > >> On Wed, Feb 26, 2020 at 5:37 PM Jason Gustafson
> > > > > > >>  wrote:
> > > > > > >>
> > > > > > >>> The PMC for Apache Kafka has invited Konstantine Karantasis as
> > > > > > >>> a committer and we are pleased to announce that he has
> > > > > > >>> accepted!
> > > > > > >>>
> > > > > > >>> Konstantine has contributed 56 patches and helped to review
> > > > > > >>> even more. His recent work includes a major overhaul of the
> > > > > > >>> Connect task management system in order to support incremental
> > > > > > >>> rebalancing. In addition to code contributions, Konstantine
> > > > > > >>> helps the community in many other ways including talks at
> > > > > > >>> meetups and at Kafka Summit and answering questions on
> > > > > > >>> stackoverflow. He consistently shows good judgement in design
> > > > > > >>> and a careful attention to details when it comes to code.
> > > > > > >>>
> > > > > > >>> Thanks for all the contributions and looking forward to more!
> > > > > > >>>
> > > > > > >>> Jason, on behalf of the Apache Kafka PMC
> > > > > > >>>
> > > > > > -BEGIN PGP SIGNATURE-
> > > > > >
> > > > > > iQIzBAEBCgAdFiEEI8mthP+5zxXZZdDSO4miYXKq/OgFAl5XJl0ACgkQO4miYXKq
> > > > > > /OjEERAAp08DioD903r9aqGCJ3oHbkUi2ZwEr0yeu1veFD3rGFflhni7sm4J87/Z
> > > > > > NArUdpHQ7+99YATtm+HY8gHN+eQeVY1+NV4lX77vPNLEONx09aIxbcSnc4Ih7JnX
> > > > > > XxcHHBeR6N9EJkMQ82nOUk5cQaDiIdkOvbOv8+NoOcZC6RMy9PKsFgwd8NTMKL8l
> > > > > > nqG8KwnV6WYNOJ05BszFSpTwJBJBg8zhZlRcmAF7iRD0cLzM4BReOmEVCcoas7ar
> > > > > > RK38okIAALDRkC9JNYpQ/s0si4V+OwP4igp0MAjM+Y2NVLhC6kK6uqNzfeD21M6U
> > > > > > mWm7nE9Tbh/K+8hgZbqfprN6vw6+NfU8dwPD0iaEOfisXwbavCfDeonwSWK0BoHo
> > > > > > zEeHRGEx7e2FHWp8KyC6XgfFWmkWJP6tCWiTtCFEScxSTzZUC+cG+a5PF1n6hIHo
> > > > > > /CH3Oml2ZGDxoEl1zt8Hs5AgKW8X4PQCsfA4LWqA4GgR6PPFPn6g8mX3/AR3wkyn
> > > > > > 8Dmlh3k8ZtsW8wX26IYBywm/yyjbnlSzRVSgAHAbpaIqe5PoMG+5fdc1tnO/2Tuf
> > > > > > jd1BjbgAD6u5BksmIGBeZXADblQ/qqfp5Q+WRTJSYLItf8HMNAZoqJFp0bRy/QXP
> > > > > > 5EZzI9J+ngJG8MYr08UcWQMZt0ytwBVTX/+FVC8Rx5r0D0fqizo=
> > > > > > =68IH
> > > > > > -END PGP SIGNATURE-
> > > > > >
> > > > >
> > > >
> > >
> > >
> > > --
> > >
> > > Thanks!
> > > --Vahid
> > >
> >


Re: [ANNOUNCE] New committer: Konstantine Karantasis

2020-02-26 Thread Tom Bentley
Congratulations!

On Thu, Feb 27, 2020 at 6:43 AM David Jacot  wrote:

> Congrats!
>
> Le jeu. 27 févr. 2020 à 06:58, Vahid Hashemian 
> a écrit :
>
> > Congratulations Konstantine!
> >
> > Regards,
> > --Vahid
> >
> > On Wed, Feb 26, 2020 at 6:49 PM Boyang Chen 
> > wrote:
> >
> > > Congrats Konstantine!
> > >
> > > On Wed, Feb 26, 2020 at 6:32 PM Manikumar 
> > > wrote:
> > >
> > > > Congrats Konstantine!
> > >
> > >
> > > > On Thu, Feb 27, 2020 at 7:46 AM Matthias J. Sax 
> > > wrote:
> > > >
> > > > > -BEGIN PGP SIGNED MESSAGE-
> > > > > Hash: SHA512
> > > > >
> > > > > Congrats!
> > > > >
> > > > > On 2/27/20 2:21 AM, Jeremy Custenborder wrote:
> > > > > > Congrats Konstantine!
> > > > > >
> > > > > > On Wed, Feb 26, 2020 at 2:39 PM Bill Bejeck 
> > > > > > wrote:
> > > > > >>
> > > > > >> Congratulations Konstantine! Well deserved.
> > > > > >>
> > > > > >> -Bill
> > > > > >>
> > > > > >> On Wed, Feb 26, 2020 at 5:37 PM Jason Gustafson
> > > > > >>  wrote:
> > > > > >>
> > > > > >>> The PMC for Apache Kafka has invited Konstantine Karantasis as
> > > > > >>> a committer and we are pleased to announce that he has
> > > > > >>> accepted!
> > > > > >>>
> > > > > >>> Konstantine has contributed 56 patches and helped to review
> > > > > >>> even more. His recent work includes a major overhaul of the
> > > > > >>> Connect task management system in order to support incremental
> > > > > >>> rebalancing. In addition to code contributions, Konstantine
> > > > > >>> helps the community in many other ways including talks at
> > > > > >>> meetups and at Kafka Summit and answering questions on
> > > > > >>> stackoverflow. He consistently shows good judgement in design
> > > > > >>> and a careful attention to details when it comes to code.
> > > > > >>>
> > > > > >>> Thanks for all the contributions and looking forward to more!
> > > > > >>>
> > > > > >>> Jason, on behalf of the Apache Kafka PMC
> > > > > >>>
> > > > >
> > > >
> > >
> >
> >
> > --
> >
> > Thanks!
> > --Vahid
> >
>


Build failed in Jenkins: kafka-2.4-jdk8 #157

2020-02-26 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-9601: Stop logging raw connector config values (#8165)


--
[...truncated 2.75 MB...]

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildName STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildName PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildIndex STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildIndex PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureApplicationAndRecordMetadata STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureApplicationAndRecordMetadata PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureRecordsOutputToChildByName STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureRecordsOutputToChildByName PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCapturePunctuator 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCapturePunctuator 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED


Re: [ANNOUNCE] New committer: Konstantine Karantasis

2020-02-26 Thread David Jacot
Congrats!

Le jeu. 27 févr. 2020 à 06:58, Vahid Hashemian 
a écrit :

> Congratulations Konstantine!
>
> Regards,
> --Vahid
>
> On Wed, Feb 26, 2020 at 6:49 PM Boyang Chen 
> wrote:
>
> > Congrats Konstantine!
> >
> > On Wed, Feb 26, 2020 at 6:32 PM Manikumar 
> > wrote:
> >
> > > Congrats Konstantine!
> >
> >
> > > On Thu, Feb 27, 2020 at 7:46 AM Matthias J. Sax 
> > wrote:
> > >
> > > > -BEGIN PGP SIGNED MESSAGE-
> > > > Hash: SHA512
> > > >
> > > > Congrats!
> > > >
> > > > On 2/27/20 2:21 AM, Jeremy Custenborder wrote:
> > > > > Congrats Konstantine!
> > > > >
> > > > > On Wed, Feb 26, 2020 at 2:39 PM Bill Bejeck 
> > > > > wrote:
> > > > >>
> > > > >> Congratulations Konstantine! Well deserved.
> > > > >>
> > > > >> -Bill
> > > > >>
> > > > >> On Wed, Feb 26, 2020 at 5:37 PM Jason Gustafson
> > > > >>  wrote:
> > > > >>
> > > > >>> The PMC for Apache Kafka has invited Konstantine Karantasis as
> > > > >>> a committer and we are pleased to announce that he has
> > > > >>> accepted!
> > > > >>>
> > > > >>> Konstantine has contributed 56 patches and helped to review
> > > > >>> even more. His recent work includes a major overhaul of the
> > > > >>> Connect task management system in order to support incremental
> > > > >>> rebalancing. In addition to code contributions, Konstantine
> > > > >>> helps the community in many other ways including talks at
> > > > >>> meetups and at Kafka Summit and answering questions on
> > > > >>> stackoverflow. He consistently shows good judgement in design
> > > > >>> and a careful attention to details when it comes to code.
> > > > >>>
> > > > >>> Thanks for all the contributions and looking forward to more!
> > > > >>>
> > > > >>> Jason, on behalf of the Apache Kafka PMC
> > > > >>>
> > > >
> > >
> >
>
>
> --
>
> Thanks!
> --Vahid
>


Build failed in Jenkins: kafka-2.3-jdk8 #182

2020-02-26 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-9601: Stop logging raw connector config values (#8165)


--
[...truncated 2.95 MB...]

kafka.controller.ControllerChannelManagerTest > testStopReplicaRequestSent 
STARTED

kafka.controller.ControllerChannelManagerTest > testStopReplicaRequestSent 
PASSED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted STARTED

kafka.controller.ControllerChannelManagerTest > 
testStopReplicaRequestsWhileTopicDeletionStarted PASSED

kafka.controller.ControllerChannelManagerTest > testLeaderAndIsrRequestSent 
STARTED

kafka.controller.ControllerChannelManagerTest > testLeaderAndIsrRequestSent 
PASSED

kafka.common.InterBrokerSendThreadTest > 
shouldCreateClientRequestAndSendWhenNodeIsReady STARTED

kafka.common.InterBrokerSendThreadTest > 
shouldCreateClientRequestAndSendWhenNodeIsReady PASSED

kafka.common.InterBrokerSendThreadTest > testFailingExpiredRequests STARTED

kafka.common.InterBrokerSendThreadTest > testFailingExpiredRequests PASSED

kafka.common.InterBrokerSendThreadTest > 
shouldCallCompletionHandlerWithDisconnectedResponseWhenNodeNotReady STARTED

kafka.common.InterBrokerSendThreadTest > 
shouldCallCompletionHandlerWithDisconnectedResponseWhenNodeNotReady PASSED

kafka.common.InterBrokerSendThreadTest > shouldNotSendAnythingWhenNoRequests 
STARTED

kafka.common.InterBrokerSendThreadTest > shouldNotSendAnythingWhenNoRequests 
PASSED

kafka.common.ZkNodeChangeNotificationListenerTest > testProcessNotification 
STARTED

kafka.common.ZkNodeChangeNotificationListenerTest > testProcessNotification 
PASSED

kafka.common.ZkNodeChangeNotificationListenerTest > 
testSwallowsProcessorException STARTED

kafka.common.ZkNodeChangeNotificationListenerTest > 
testSwallowsProcessorException PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics STARTED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody STARTED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody PASSED

kafka.tools.ConsumerPerformanceTest > testConfigWithUnrecognizedOption STARTED

kafka.tools.ConsumerPerformanceTest > testConfigWithUnrecognizedOption PASSED

kafka.tools.ConsumerPerformanceTest > testConfig STARTED

kafka.tools.ConsumerPerformanceTest > testConfig PASSED

kafka.tools.ConsumerPerformanceTest > testNonDetailedHeaderMatchBody STARTED

kafka.tools.ConsumerPerformanceTest > testNonDetailedHeaderMatchBody PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp STARTED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs STARTED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigs STARTED

kafka.tools.ConsoleProducerTest > testValidConfigs PASSED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum 
STARTED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit STARTED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseGroupIdFromBeginningGivenTogether 
STARTED

kafka.tools.ConsoleConsumerTest > shouldParseGroupIdFromBeginningGivenTogether 
PASSED

kafka.tools.ConsoleConsumerTest > shouldExitOnOffsetWithoutPartition STARTED

kafka.tools.ConsoleConsumerTest > shouldExitOnOffsetWithoutPartition PASSED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails STARTED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails PASSED

kafka.tools.ConsoleConsumerTest > 
shouldExitOnInvalidConfigWithAutoOffsetResetAndConflictingFromBeginning STARTED

kafka.tools.ConsoleConsumerTest > 
shouldExitOnInvalidConfigWithAutoOffsetResetAndConflictingFromBeginning PASSED

kafka.tools.ConsoleConsumerTest > 

Re: [ANNOUNCE] New committer: Konstantine Karantasis

2020-02-26 Thread Vahid Hashemian
Congratulations Konstantine!

Regards,
--Vahid

On Wed, Feb 26, 2020 at 6:49 PM Boyang Chen 
wrote:

> Congrats Konstantine!
>
> On Wed, Feb 26, 2020 at 6:32 PM Manikumar 
> wrote:
>
> > Congrats Konstantine!
>
>
> > On Thu, Feb 27, 2020 at 7:46 AM Matthias J. Sax 
> wrote:
> >
> > > -BEGIN PGP SIGNED MESSAGE-
> > > Hash: SHA512
> > >
> > > Congrats!
> > >
> > > On 2/27/20 2:21 AM, Jeremy Custenborder wrote:
> > > > Congrats Konstantine!
> > > >
> > > > On Wed, Feb 26, 2020 at 2:39 PM Bill Bejeck 
> > > > wrote:
> > > >>
> > > >> Congratulations Konstantine! Well deserved.
> > > >>
> > > >> -Bill
> > > >>
> > > >> On Wed, Feb 26, 2020 at 5:37 PM Jason Gustafson
> > > >>  wrote:
> > > >>
> > > >>> The PMC for Apache Kafka has invited Konstantine Karantasis as
> > > >>> a committer and we are pleased to announce that he has
> > > >>> accepted!
> > > >>>
> > > >>> Konstantine has contributed 56 patches and helped to review
> > > >>> even more. His recent work includes a major overhaul of the
> > > >>> Connect task management system in order to support incremental
> > > >>> rebalancing. In addition to code contributions, Konstantine
> > > >>> helps the community in many other ways including talks at
> > > >>> meetups and at Kafka Summit and answering questions on
> > > >>> stackoverflow. He consistently shows good judgement in design
> > > >>> and a careful attention to details when it comes to code.
> > > >>>
> > > >>> Thanks for all the contributions and looking forward to more!
> > > >>>
> > > >>> Jason, on behalf of the Apache Kafka PMC
> > > >>>
> > >
> >
>


-- 

Thanks!
--Vahid


Build failed in Jenkins: kafka-trunk-jdk8 #4277

2020-02-26 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9607: Do not clear partition queues during close (#8168)


--
[...truncated 2.88 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > 

Build failed in Jenkins: kafka-2.0-jdk8 #311

2020-02-26 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-9601: Stop logging raw connector config values (#8165)


--
[...truncated 894.38 KB...]

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr STARTED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr PASSED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testCreateRecursive STARTED

kafka.zk.KafkaZkClientTest > testCreateRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData STARTED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods PASSED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw STARTED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw PASSED

kafka.zk.KafkaZkClientTest > testAclManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testAclManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods STARTED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateLogDir STARTED

kafka.zk.KafkaZkClientTest > testPropagateLogDir PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndStat STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndStat PASSED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress STARTED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress PASSED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths STARTED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters PASSED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetLogConfigs STARTED

kafka.zk.KafkaZkClientTest > testGetLogConfigs PASSED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods STARTED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath STARTED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode PASSED

kafka.zk.KafkaZkClientTest > testDeletePath STARTED

kafka.zk.KafkaZkClientTest > testDeletePath PASSED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods STARTED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions STARTED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions PASSED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath STARTED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath PASSED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods STARTED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges STARTED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges PASSED

kafka.zk.KafkaZkClientTest > testControllerEpochMethods STARTED

kafka.zk.KafkaZkClientTest > testControllerEpochMethods PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursive STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetTopicPartitionStates STARTED

kafka.zk.KafkaZkClientTest > testGetTopicPartitionStates PASSED

kafka.zk.KafkaZkClientTest > testCreateConfigChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateConfigChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testDelegationTokenMethods STARTED

kafka.zk.KafkaZkClientTest > testDelegationTokenMethods PASSED

kafka.zk.LiteralAclStoreTest > shouldHaveCorrectPaths STARTED

kafka.zk.LiteralAclStoreTest > shouldHaveCorrectPaths PASSED

kafka.zk.LiteralAclStoreTest > shouldRoundTripChangeNode STARTED

kafka.zk.LiteralAclStoreTest > shouldRoundTripChangeNode PASSED

kafka.zk.LiteralAclStoreTest > shouldThrowFromEncodeOnNoneLiteral STARTED

kafka.zk.LiteralAclStoreTest > shouldThrowFromEncodeOnNoneLiteral PASSED

kafka.zk.LiteralAclStoreTest > shouldWriteChangesToTheWritePath 

Build failed in Jenkins: kafka-2.2-jdk8 #32

2020-02-26 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-9601: Stop logging raw connector config values (#8165)


--
[...truncated 2.74 MB...]

kafka.api.TransactionsTest > testMultipleMarkersOneLeader STARTED

kafka.api.TransactionsTest > testMultipleMarkersOneLeader PASSED

kafka.api.TransactionsTest > testSendOffsets STARTED

kafka.api.TransactionsTest > testSendOffsets PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationSuccess STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationSuccess PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testProducerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure 
STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure 
PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerWithAuthenticationFailure PASSED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled 
STARTED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.UserClientIdQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.UserClientIdQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.UserClientIdQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.UserClientIdQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.UserClientIdQuotaTest > testThrottledRequest STARTED

kafka.api.UserClientIdQuotaTest > testThrottledRequest PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDataChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDataChange PASSED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperSessionStateMetric STARTED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperSessionStateMetric PASSED

kafka.zookeeper.ZooKeeperClientTest > testExceptionInBeforeInitializingSession 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testExceptionInBeforeInitializingSession 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnection STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnection PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiryDuringClose STARTED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiryDuringClose PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout PASSED

kafka.zookeeper.ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler STARTED

kafka.zookeeper.ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler PASSED


Build failed in Jenkins: kafka-2.4-jdk8 #156

2020-02-26 Thread Apache Jenkins Server
See 


Changes:

[vvcephei] MINOR: Fix gradle error writing test stdout (#8133)


--
[...truncated 5.57 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #4276

2020-02-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Remove tag from metric to measure process-rate on source nodes


--
[...truncated 2.88 MB...]

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task 

Re: Discussion thread for KIP-X

2020-02-26 Thread Rangan Prabhakaran (BLOOMBERG/ 919 3RD A)
Sounds great Boyang! Look forward to hearing back. 

Rangan 

- Original Message -
From: Boyang Chen 
To: RANGAN PRABHAKARAN, dev@kafka.apache.org
At: 26-Feb-2020 13:45:01


Thanks Rangan for the input! We will do a retro for Q1; I will capture your 
use cases there and notify you once we have a next-step plan.

Boyang


From: Rangan Prabhakaran (BLOOMBERG/ 919 3RD A) 
Sent: Tuesday, February 25, 2020 2:57 AM
To: dev@kafka.apache.org 
Cc: bche...@outlook.com 
Subject: Re: Discussion thread for KIP-X

Hi Boyang,
The Kafka clusters we manage are multi-tenant clusters hosting anywhere from 
hundreds to a few thousand different workloads on any given cluster.

For our setup, we have noticed that the breaking limit with respect to partition 
count is around 10k partitions per broker. Beyond this point, we start seeing 
significant replication slowness, election slowness, issues with too many 
open files, etc.

The type of workloads on our clusters that would benefit from the proposal 
outlined in this KIP are

  *   Bursty workloads such as workloads that flood the topic once an hour and 
need to be processed quickly within a strict time window
  *   Workloads that are using topics as simple queues (stateless and don’t 
care about ordering within a partition)
  *   Stream processing workloads where parallelism is driven by the number of 
input topic partitions

Currently, we are over provisioning partitions to efficiently serve these 
workloads which results in significant under-utilization of the respective 
clusters.

Additionally, we are also seeing quite a few workloads that are relying on the 
partition level ordering guarantees today and are filtering out the keys they 
don’t care about on the client side. These workloads would benefit from the key 
level ordering proposed in KIP-X and result in much simpler application logic 
for clients.
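The client-side filtering described above can be sketched roughly as follows. This is a hypothetical illustration only, not code from the KIP: the record type and key set are made up, and in a real application the records would come from a KafkaConsumer poll() loop.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch of the client-side key filtering described above:
// each consumer discards records whose key it does not care about.
public class KeyFilterSketch {

    // Stand-in for a consumed record; a real application would use
    // org.apache.kafka.clients.consumer.ConsumerRecord instead.
    static final class Record {
        final String key;
        final String value;
        Record(String key, String value) { this.key = key; this.value = value; }
    }

    // Keep only the records whose key this consumer is interested in.
    static List<Record> keep(List<Record> polled, Set<String> wantedKeys) {
        return polled.stream()
                     .filter(r -> wantedKeys.contains(r.key))
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Record> polled = List.of(
            new Record("a", "1"), new Record("b", "2"), new Record("a", "3"));
        // This consumer only cares about key "a"; records for "b" are dropped
        // on the client side, even though the broker still delivered them.
        System.out.println(keep(polled, Set.of("a")).size()); // prints 2
    }
}
```

Key-level ordering as proposed in the KIP would presumably move this routing out of the clients, avoiding both the wasted network transfer and the per-application filtering logic.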

Let me know if this helps and if you have any further questions
Rangan

From: dev@kafka.apache.org At: 02/21/20 15:45:49
To: dev@kafka.apache.org
Cc: Rangan Prabhakaran (BLOOMBERG/ 919 3RD A), bche...@outlook.com
Subject: Re: Discussion thread for KIP-X

Hey Rangan,

thanks for the interest! In fact we are still in the design phase, and need
more supporting use cases that require a higher scaling factor than the number
of partitions. It would be good if you could share some of your use cases
where the time to process a single record is the bottleneck, or any
cost-related concerns about over-partitioning.

Boyang
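The scaling limit Boyang refers to comes from the current consumer-group model: within a group, each partition is assigned to at most one consumer, so effective parallelism is capped by the partition count. A minimal sketch of that arithmetic (the numbers are invented for illustration):

```java
// With today's consumer-group assignment, a partition goes to at most one
// consumer in a group, so adding consumers beyond the partition count
// yields no extra parallelism -- the surplus consumers sit idle.
public class ParallelismCap {

    static int effectiveParallelism(int consumers, int partitions) {
        return Math.min(consumers, partitions);
    }

    static int idleConsumers(int consumers, int partitions) {
        return Math.max(0, consumers - partitions);
    }

    public static void main(String[] args) {
        // e.g. 16 consumers on a 12-partition topic:
        System.out.println(effectiveParallelism(16, 12)); // prints 12
        System.out.println(idleConsumers(16, 12));        // prints 4
    }
}
```

This cap is why slow per-record processing currently forces over-partitioning: the only way to add workers is to add partitions first.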

On Fri, Feb 21, 2020 at 10:44 AM Guozhang Wang <wangg...@gmail.com> wrote:

> cc @Boyang Chen <bche...@outlook.com> who 
> authored this draft.
>
>
> Guozhang
>
> On Fri, Feb 21, 2020 at 10:29 AM Rangan Prabhakaran (BLOOMBERG/ 919 3RD A)
> <
> kprabhaka...@bloomberg.net> wrote:
>
> > Hi,
> > A few of us have been following KIP-X. We are interested in the roadmap /
> > plan there and would like to contribute towards the same.
> >
> > What are the next steps to discuss / iterate on this KIP? Currently, it's
> > in draft state and there does not seem to be a discussion thread attached
> > yet.
> >
> > KIP -
> >
>
https://cwiki.apache.org/confluence/display/KAFKA/KIP-X%3A+Introduce+a+cooperative+consumer+processing+semantic
> >
> > Thanks
> > Rangan
>
>
>
> --
> -- Guozhang
>


Re: [ANNOUNCE] New committer: Konstantine Karantasis

2020-02-26 Thread Boyang Chen
Congrats Konstantine!

On Wed, Feb 26, 2020 at 6:32 PM Manikumar  wrote:

> Congrats Konstantine!


> On Thu, Feb 27, 2020 at 7:46 AM Matthias J. Sax  wrote:
>
> > -BEGIN PGP SIGNED MESSAGE-
> > Hash: SHA512
> >
> > Congrats!
> >
> > On 2/27/20 2:21 AM, Jeremy Custenborder wrote:
> > > Congrats Konstantine!
> > >
> > > On Wed, Feb 26, 2020 at 2:39 PM Bill Bejeck 
> > > wrote:
> > >>
> > >> Congratulations Konstantine! Well deserved.
> > >>
> > >> -Bill
> > >>
> > >> On Wed, Feb 26, 2020 at 5:37 PM Jason Gustafson
> > >>  wrote:
> > >>
> > >>> The PMC for Apache Kafka has invited Konstantine Karantasis as
> > >>> a committer and we are pleased to announce that he has
> > >>> accepted!
> > >>>
> > >>> Konstantine has contributed 56 patches and helped to review
> > >>> even more. His recent work includes a major overhaul of the
> > >>> Connect task management system in order to support incremental
> > >>> rebalancing. In addition to code contributions, Konstantine
> > >>> helps the community in many other ways including talks at
> > >>> meetups and at Kafka Summit and answering questions on
> > >>> stackoverflow. He consistently shows good judgement in design
> > >>> and a careful attention to details when it comes to code.
> > >>>
> > >>> Thanks for all the contributions and looking forward to more!
> > >>>
> > >>> Jason, on behalf of the Apache Kafka PMC
> > >>>
> > -BEGIN PGP SIGNATURE-
> >
> > iQIzBAEBCgAdFiEEI8mthP+5zxXZZdDSO4miYXKq/OgFAl5XJl0ACgkQO4miYXKq
> > /OjEERAAp08DioD903r9aqGCJ3oHbkUi2ZwEr0yeu1veFD3rGFflhni7sm4J87/Z
> > NArUdpHQ7+99YATtm+HY8gHN+eQeVY1+NV4lX77vPNLEONx09aIxbcSnc4Ih7JnX
> > XxcHHBeR6N9EJkMQ82nOUk5cQaDiIdkOvbOv8+NoOcZC6RMy9PKsFgwd8NTMKL8l
> > nqG8KwnV6WYNOJ05BszFSpTwJBJBg8zhZlRcmAF7iRD0cLzM4BReOmEVCcoas7ar
> > RK38okIAALDRkC9JNYpQ/s0si4V+OwP4igp0MAjM+Y2NVLhC6kK6uqNzfeD21M6U
> > mWm7nE9Tbh/K+8hgZbqfprN6vw6+NfU8dwPD0iaEOfisXwbavCfDeonwSWK0BoHo
> > zEeHRGEx7e2FHWp8KyC6XgfFWmkWJP6tCWiTtCFEScxSTzZUC+cG+a5PF1n6hIHo
> > /CH3Oml2ZGDxoEl1zt8Hs5AgKW8X4PQCsfA4LWqA4GgR6PPFPn6g8mX3/AR3wkyn
> > 8Dmlh3k8ZtsW8wX26IYBywm/yyjbnlSzRVSgAHAbpaIqe5PoMG+5fdc1tnO/2Tuf
> > jd1BjbgAD6u5BksmIGBeZXADblQ/qqfp5Q+WRTJSYLItf8HMNAZoqJFp0bRy/QXP
> > 5EZzI9J+ngJG8MYr08UcWQMZt0ytwBVTX/+FVC8Rx5r0D0fqizo=
> > =68IH
> > -END PGP SIGNATURE-
> >
>


Jenkins build is back to normal : kafka-2.5-jdk8 #45

2020-02-26 Thread Apache Jenkins Server
See 




Re: [ANNOUNCE] New committer: Konstantine Karantasis

2020-02-26 Thread Manikumar
Congrats Konstantine!

On Thu, Feb 27, 2020 at 7:46 AM Matthias J. Sax  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA512
>
> Congrats!
>
> On 2/27/20 2:21 AM, Jeremy Custenborder wrote:
> > Congrats Konstantine!
> >
> > On Wed, Feb 26, 2020 at 2:39 PM Bill Bejeck 
> > wrote:
> >>
> >> Congratulations Konstantine! Well deserved.
> >>
> >> -Bill
> >>
> >> On Wed, Feb 26, 2020 at 5:37 PM Jason Gustafson
> >>  wrote:
> >>
> >>> The PMC for Apache Kafka has invited Konstantine Karantasis as
> >>> a committer and we are pleased to announce that he has
> >>> accepted!
> >>>
> >>> Konstantine has contributed 56 patches and helped to review
> >>> even more. His recent work includes a major overhaul of the
> >>> Connect task management system in order to support incremental
> >>> rebalancing. In addition to code contributions, Konstantine
> >>> helps the community in many other ways including talks at
> >>> meetups and at Kafka Summit and answering questions on
> >>> stackoverflow. He consistently shows good judgement in design
> >>> and a careful attention to details when it comes to code.
> >>>
> >>> Thanks for all the contributions and looking forward to more!
> >>>
> >>> Jason, on behalf of the Apache Kafka PMC
> >>>
> -BEGIN PGP SIGNATURE-
>
> iQIzBAEBCgAdFiEEI8mthP+5zxXZZdDSO4miYXKq/OgFAl5XJl0ACgkQO4miYXKq
> /OjEERAAp08DioD903r9aqGCJ3oHbkUi2ZwEr0yeu1veFD3rGFflhni7sm4J87/Z
> NArUdpHQ7+99YATtm+HY8gHN+eQeVY1+NV4lX77vPNLEONx09aIxbcSnc4Ih7JnX
> XxcHHBeR6N9EJkMQ82nOUk5cQaDiIdkOvbOv8+NoOcZC6RMy9PKsFgwd8NTMKL8l
> nqG8KwnV6WYNOJ05BszFSpTwJBJBg8zhZlRcmAF7iRD0cLzM4BReOmEVCcoas7ar
> RK38okIAALDRkC9JNYpQ/s0si4V+OwP4igp0MAjM+Y2NVLhC6kK6uqNzfeD21M6U
> mWm7nE9Tbh/K+8hgZbqfprN6vw6+NfU8dwPD0iaEOfisXwbavCfDeonwSWK0BoHo
> zEeHRGEx7e2FHWp8KyC6XgfFWmkWJP6tCWiTtCFEScxSTzZUC+cG+a5PF1n6hIHo
> /CH3Oml2ZGDxoEl1zt8Hs5AgKW8X4PQCsfA4LWqA4GgR6PPFPn6g8mX3/AR3wkyn
> 8Dmlh3k8ZtsW8wX26IYBywm/yyjbnlSzRVSgAHAbpaIqe5PoMG+5fdc1tnO/2Tuf
> jd1BjbgAD6u5BksmIGBeZXADblQ/qqfp5Q+WRTJSYLItf8HMNAZoqJFp0bRy/QXP
> 5EZzI9J+ngJG8MYr08UcWQMZt0ytwBVTX/+FVC8Rx5r0D0fqizo=
> =68IH
> -END PGP SIGNATURE-
>


Re: [ANNOUNCE] New committer: Konstantine Karantasis

2020-02-26 Thread Matthias J. Sax
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Congrats!

On 2/27/20 2:21 AM, Jeremy Custenborder wrote:
> Congrats Konstantine!
>
> On Wed, Feb 26, 2020 at 2:39 PM Bill Bejeck 
> wrote:
>>
>> Congratulations Konstantine! Well deserved.
>>
>> -Bill
>>
>> On Wed, Feb 26, 2020 at 5:37 PM Jason Gustafson
>>  wrote:
>>
>>> The PMC for Apache Kafka has invited Konstantine Karantasis as
>>> a committer and we are pleased to announce that he has
>>> accepted!
>>>
>>> Konstantine has contributed 56 patches and helped to review
>>> even more. His recent work includes a major overhaul of the
>>> Connect task management system in order to support incremental
>>> rebalancing. In addition to code contributions, Konstantine
>>> helps the community in many other ways including talks at
>>> meetups and at Kafka Summit and answering questions on
>>> stackoverflow. He consistently shows good judgement in design
>>> and a careful attention to details when it comes to code.
>>>
>>> Thanks for all the contributions and looking forward to more!
>>>
>>> Jason, on behalf of the Apache Kafka PMC
>>>
-BEGIN PGP SIGNATURE-

iQIzBAEBCgAdFiEEI8mthP+5zxXZZdDSO4miYXKq/OgFAl5XJl0ACgkQO4miYXKq
/OjEERAAp08DioD903r9aqGCJ3oHbkUi2ZwEr0yeu1veFD3rGFflhni7sm4J87/Z
NArUdpHQ7+99YATtm+HY8gHN+eQeVY1+NV4lX77vPNLEONx09aIxbcSnc4Ih7JnX
XxcHHBeR6N9EJkMQ82nOUk5cQaDiIdkOvbOv8+NoOcZC6RMy9PKsFgwd8NTMKL8l
nqG8KwnV6WYNOJ05BszFSpTwJBJBg8zhZlRcmAF7iRD0cLzM4BReOmEVCcoas7ar
RK38okIAALDRkC9JNYpQ/s0si4V+OwP4igp0MAjM+Y2NVLhC6kK6uqNzfeD21M6U
mWm7nE9Tbh/K+8hgZbqfprN6vw6+NfU8dwPD0iaEOfisXwbavCfDeonwSWK0BoHo
zEeHRGEx7e2FHWp8KyC6XgfFWmkWJP6tCWiTtCFEScxSTzZUC+cG+a5PF1n6hIHo
/CH3Oml2ZGDxoEl1zt8Hs5AgKW8X4PQCsfA4LWqA4GgR6PPFPn6g8mX3/AR3wkyn
8Dmlh3k8ZtsW8wX26IYBywm/yyjbnlSzRVSgAHAbpaIqe5PoMG+5fdc1tnO/2Tuf
jd1BjbgAD6u5BksmIGBeZXADblQ/qqfp5Q+WRTJSYLItf8HMNAZoqJFp0bRy/QXP
5EZzI9J+ngJG8MYr08UcWQMZt0ytwBVTX/+FVC8Rx5r0D0fqizo=
=68IH
-END PGP SIGNATURE-


Build failed in Jenkins: kafka-1.0-jdk8 #289

2020-02-26 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-9601: Stop logging raw connector config values (#8165)


--
[...truncated 376.13 KB...]

kafka.coordinator.transaction.TransactionMarkerRequestCompletionHandlerTest > 
shouldThrowIllegalStateExceptionIfErrorCodeNotAvailableForPid PASSED

kafka.coordinator.transaction.TransactionLogTest > shouldReadWriteMessages 
STARTED

kafka.coordinator.transaction.TransactionLogTest > shouldReadWriteMessages 
PASSED

kafka.coordinator.transaction.TransactionLogTest > 
shouldThrowExceptionWriteInvalidTxn STARTED

kafka.coordinator.transaction.TransactionLogTest > 
shouldThrowExceptionWriteInvalidTxn PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED

kafka.network.SocketServerTest > testNoOpAction PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

kafka.network.SocketServerTest > closingChannelException PASSED

kafka.network.SocketServerTest > testIdleConnection STARTED

kafka.network.SocketServerTest > testIdleConnection PASSED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed STARTED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed PASSED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown STARTED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown PASSED

kafka.network.SocketServerTest > testSessionPrincipal STARTED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > configureNewConnectionException STARTED

kafka.network.SocketServerTest > configureNewConnectionException PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > processNewResponseException STARTED

kafka.network.SocketServerTest > processNewResponseException PASSED

kafka.network.SocketServerTest > processCompletedSendException STARTED

kafka.network.SocketServerTest > processCompletedSendException PASSED

kafka.network.SocketServerTest > processDisconnectedException STARTED

kafka.network.SocketServerTest > processDisconnectedException PASSED

kafka.network.SocketServerTest > sendCancelledKeyException STARTED

kafka.network.SocketServerTest > sendCancelledKeyException PASSED

kafka.network.SocketServerTest > processCompletedReceiveException STARTED

kafka.network.SocketServerTest > processCompletedReceiveException PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown STARTED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > pollException STARTED

kafka.network.SocketServerTest > pollException PASSED

kafka.network.SocketServerTest > testSslSocketServer STARTED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected STARTED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > 

[jira] [Resolved] (KAFKA-9607) Should not clear partition queue during task close

2020-02-26 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-9607.
--
Fix Version/s: 2.6.0
   Resolution: Fixed

> Should not clear partition queue during task close
> --
>
> Key: KAFKA-9607
> URL: https://issues.apache.org/jira/browse/KAFKA-9607
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
> Fix For: 2.6.0
>
>
> We detected an issue where a corrupted task failed to revive:
> {code:java}
> [2020-02-25T08:23:38-08:00] 
> (streams-soak-trunk_soak_i-06305ad57801079ae_streamslog) [2020-02-25 
> 16:23:38,137] INFO 
> [stream-soak-test-8f2124ec-8bd0-410a-8e3d-f202a32ab774-StreamThread-1] 
> stream-thread 
> [stream-soak-test-8f2124ec-8bd0-410a-8e3d-f202a32ab774-StreamThread-1] Handle 
> new assignment with:
>         New active tasks: [0_0, 3_1]
>         New standby tasks: []
>         Existing active tasks: [0_0]
>         Existing standby tasks: [] 
> (org.apache.kafka.streams.processor.internals.TaskManager)
> [2020-02-25T08:23:38-08:00] 
> (streams-soak-trunk_soak_i-06305ad57801079ae_streamslog) [2020-02-25 
> 16:23:38,138] INFO 
> [stream-soak-test-8f2124ec-8bd0-410a-8e3d-f202a32ab774-StreamThread-1] 
> [Consumer 
> clientId=stream-soak-test-8f2124ec-8bd0-410a-8e3d-f202a32ab774-StreamThread-1-consumer,
>  groupId=stream-soak-test] Adding newly assigned partitions: 
> k8sName-id-repartition-1 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
> [2020-02-25T08:23:38-08:00] 
> (streams-soak-trunk_soak_i-06305ad57801079ae_streamslog) [2020-02-25 
> 16:23:38,138] INFO 
> [stream-soak-test-8f2124ec-8bd0-410a-8e3d-f202a32ab774-StreamThread-1] 
> stream-thread 
> [stream-soak-test-8f2124ec-8bd0-410a-8e3d-f202a32ab774-StreamThread-1] State 
> transition from RUNNING to PARTITIONS_ASSIGNED 
> (org.apache.kafka.streams.processor.internals.StreamThread)
> [2020-02-25T08:23:39-08:00] 
> (streams-soak-trunk_soak_i-06305ad57801079ae_streamslog) [2020-02-25 
> 16:23:38,419] WARN 
> [stream-soak-test-8f2124ec-8bd0-410a-8e3d-f202a32ab774-StreamThread-1] 
> stream-thread 
> [stream-soak-test-8f2124ec-8bd0-410a-8e3d-f202a32ab774-StreamThread-1] 
> Encountered org.apache.kafka.clients.consumer.OffsetOutOfRangeException 
> fetching records from restore consumer for partitions 
> [stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-49-changelog-1], it 
> is likely that the consumer's position has fallen out of the topic partition 
> offset range because the topic was truncated or compacted on the broker, 
> marking the corresponding tasks as corrupted and re-initializing it later. 
> (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
> [2020-02-25T08:23:38-08:00] 
> (streams-soak-trunk_soak_i-06305ad57801079ae_streamslog) [2020-02-25 
> 16:23:38,139] INFO 
> [stream-soak-test-8f2124ec-8bd0-410a-8e3d-f202a32ab774-StreamThread-1] 
> [Consumer 
> clientId=stream-soak-test-8f2124ec-8bd0-410a-8e3d-f202a32ab774-StreamThread-1-consumer,
>  groupId=stream-soak-test] Setting offset for partition 
> k8sName-id-repartition-1 to the committed offset 
> FetchPosition{offset=3592242, offsetEpoch=Optional.empty, 
> currentLeader=LeaderAndEpoch{leader=Optional[ip-172-31-25-115.us-west-2.compute.internal:9092
>  (id: 1003 rack: null)], epoch=absent}} 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
> [2020-02-25T08:23:39-08:00] 
> (streams-soak-trunk_soak_i-06305ad57801079ae_streamslog) [2020-02-25 
> 16:23:38,463] ERROR 
> [stream-soak-test-8f2124ec-8bd0-410a-8e3d-f202a32ab774-StreamThread-1] 
> stream-thread 
> [stream-soak-test-8f2124ec-8bd0-410a-8e3d-f202a32ab774-StreamThread-1] 
> Encountered the following exception during processing and the thread is going 
> to shut down:  (org.apache.kafka.streams.processor.internals.StreamThread)
> [2020-02-25T08:23:39-08:00] 
> (streams-soak-trunk_soak_i-06305ad57801079ae_streamslog) 
> java.lang.IllegalStateException: Partition k8sName-id-repartition-1 not found.
>         at 
> org.apache.kafka.streams.processor.internals.PartitionGroup.setPartitionTime(PartitionGroup.java:99)
>         at 
> org.apache.kafka.streams.processor.internals.StreamTask.initializeTaskTime(StreamTask.java:651)
>         at 
> org.apache.kafka.streams.processor.internals.StreamTask.initializeMetadata(StreamTask.java:631)
>         at 
> org.apache.kafka.streams.processor.internals.StreamTask.completeRestoration(StreamTask.java:209)
>         at 
> org.apache.kafka.streams.processor.internals.TaskManager.tryToCompleteRestoration(TaskManager.java:270)
>         at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:834)
>         at 
> 

Re: [Vote] KIP-571: Add option to force remove members in StreamsResetter

2020-02-26 Thread feyman2009
Updated with the KIP link: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-571%3A+Add+option+to+force+remove+members+in+StreamsResetter


--
From: feyman2009 
Sent: Thursday, February 27, 2020 09:38
To: dev 
Subject: [Vote] KIP-571: Add option to force remove members in StreamsResetter


Hi, all
I would like to start a vote on KIP-571: Add option to force remove members 
in StreamsResetter .

Thanks!
Feyman



Re: [Discuss] KIP-571: Add option to force remove members in StreamsResetter

2020-02-26 Thread feyman2009
Hi, Sophie
Thanks a lot!
I have initiated a vote 

Thanks!
Feyman


--
From: Sophie Blee-Goldman 
Sent: Thursday, February 27, 2020 08:04
To: feyman2009 
Subject: Re: [Discuss] KIP-571: Add option to force remove members in StreamsResetter

Hi guys,

Just to clarify, I meant a batch API on the admin client, not for the StreamsResetter, 
to avoid
extra round trips and allow a simpler API. But I suppose it might be useful to be 
able to
remove individual (dynamic) members and not the whole group for other use cases
that could then benefit from this as well.

Anyways, I'm fine with the current plan if that makes sense to you. Feel free 
to call
for a vote if the KIP is ready

Cheers,
Sophie
On Mon, Feb 24, 2020 at 4:16 AM feyman2009  wrote:

Hi, Boyang
Thanks! I have updated the KIP :)
If Sophie also thinks it's ok, I will start a vote soon.

Thanks!
Feyman

--
From: Boyang Chen 
Sent: Monday, February 24, 2020 00:42
To: dev 
Subject: Re: [Discuss] KIP-571: Add option to force remove members in StreamsResetter

Hey Feyman,

thanks a lot for the update, the KIP LGTM now. Will let Sophie take a look
again, also a minor API change:
s/setGroupInstanceId/withGroupInstanceId, and similar to setMemberId, as
usually setters are not expected to return an actual object.

Boyang

On Sat, Feb 22, 2020 at 11:05 PM feyman2009  wrote:

> Hi, Boyang
> Thanks for your review, I have updated the KIP page :)
>
> Hi, Sophie
> Thanks for your suggestions!
> 1)  Did you consider an API that just removes *all* remaining members
> from a group?
> We plan to implement the batch removal in StreamsResetter as below:
> 1) adminClient#describeConsumerGroups to get members in each
> group, this part needs no change.
> 2) adminClient#removeMembersFromConsumerGroup to remove all the
> members got from the above call (This involves API change to support the
> dynamic member removal)
> I think your suggestion is feasible but maybe not necessary currently
> since it is a subset of the combination of the above two APIs. Looking at
> the APIs in KafkaAdminClient, the adminClient.deleteXXX always takes a
> collection as the input parameter and the caller does the "query and
> delete" if "delete all" is needed, this leaves more burden on the caller
> side but increases flexibility. Since the KafkaAdminClient's API is still
> evolving, I think it would be reasonable to follow the convention and not
> adding a "removal all members" API.
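As an editorial illustration of the describe-then-remove flow outlined above, the sketch below models it with a plain in-memory registry; `RemoveAllMembersSketch`, `membersOf`, and `removeAll` are hypothetical stand-ins for `describeConsumerGroups`/`removeMembersFromConsumerGroup`, not the actual AdminClient API:

```java
import java.util.*;

// Hypothetical in-memory stand-in for the admin client; the real flow would
// call describeConsumerGroups and removeMembersFromConsumerGroup instead.
public class RemoveAllMembersSketch {
    private final Map<String, Set<String>> groups = new HashMap<>();

    void join(String groupId, String memberId) {
        groups.computeIfAbsent(groupId, g -> new HashSet<>()).add(memberId);
    }

    // Step 1: "describe" the group to discover its current members.
    Set<String> membersOf(String groupId) {
        return new HashSet<>(groups.getOrDefault(groupId, Set.of()));
    }

    // Step 2: remove every member returned by the describe step.
    void removeAll(String groupId, Set<String> members) {
        groups.getOrDefault(groupId, new HashSet<>()).removeAll(members);
    }

    public static void main(String[] args) {
        RemoveAllMembersSketch admin = new RemoveAllMembersSketch();
        admin.join("stream-app", "member-1");
        admin.join("stream-app", "member-2");
        // Caller does the "query and delete", mirroring the KIP's plan.
        admin.removeAll("stream-app", admin.membersOf("stream-app"));
        System.out.println(admin.membersOf("stream-app").isEmpty()); // prints true
    }
}
```

This mirrors the convention noted in the reply: the removal API takes a collection, and the caller composes "describe" with "remove" when it wants to empty the whole group.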
>
> 2) Thanks to Boyang's correction, broker version >= 2.4 is needed
> since batch members removal is introduced since then(please check KIP-345
> 
>  for
> details).
> If it is used upon the older clusters like 2.3, 
> *UnsupportedVersionException
> *will be thrown.
>
> Thanks!
> Haoran
>
> --
> From: Boyang Chen 
> Sent: Wednesday, February 19, 2020 11:57
> To: dev 
> Subject: Re: [Discuss] KIP-571: Add option to force remove members in
> StreamsResetter
>
> Also Feyman, there is one thing I forget which is that the leave group
> change was introduced in 2.4 broker instead of 2.3. Feel free to correct it
> on the KIP.
>
> On Tue, Feb 18, 2020 at 5:44 PM Sophie Blee-Goldman 
> wrote:
>
> > Hey Feyman,
> >
> > Thanks for the KIP! I had two high-level questions:
> >
>
> > It seems like, in the specific case motivating this KIP, we would only ever
> > want to remove *all* the members remaining in the group (and never just a
> > single member at a time). As you mention there is already an admin API to
>
> > remove static members, but we'd still need something new to handle dynamic
> > ones. Did you consider an API that just removes *all* remaining members
> > from a group, rather than requiring the caller to determine and then
> > specify the
> > group.id (static) or member.id (dynamic) for each one? This way we can
> > just
>
> > have a single API exposed that will handle what we need to do regardless of
> > whether static membership is used or not.
> >
>
> > My other question is, will this new option only work for clusters that are
> > on 2.3
> > or higher? Do you have any thoughts about whether it would be possible to
> > implement this feature for older clusters as well, or are we dependent on
> > changes only introduced in 2.3?
> >
> > If so, we should make it absolutely clear what will happen if this is used
> > with
> > an older cluster. That is, will the reset tool exit with a clear error
> > message right
> > away, or will it potentially leave the app in a partially reset state?
> >
> > Thanks!
> > Sophie
> >
> > On Tue, Feb 18, 2020 at 4:30 PM Boyang Chen 
> > wrote:
> >
>
> > > Thanks for the update Feyman. The updates look great, except one thing I
> > > would like to be more specific is error cases 

Build failed in Jenkins: kafka-2.1-jdk8 #253

2020-02-26 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-9601: Stop logging raw connector config values (#8165)


--
[...truncated 467.34 KB...]

kafka.log.LogValidatorTest > testLogAppendTimeWithoutRecompressionV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithoutRecompressionV1 PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithoutRecompressionV2 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithoutRecompressionV2 PASSED

kafka.log.LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed 
STARTED

kafka.log.LogValidatorTest > testUncompressedBatchWithoutRecordsNotAllowed 
PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV1ToV0NonCompressed PASSED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed STARTED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentNonCompressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1Compressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV1ToV0Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV1ToV0Compressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2Compressed PASSED

kafka.log.LogValidatorTest > testNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testNonCompressedV2 STARTED

kafka.log.LogValidatorTest > testNonCompressedV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2NonCompressed PASSED

kafka.log.LogValidatorTest > testInvalidCreateTimeCompressedV1 STARTED

kafka.log.LogValidatorTest > testInvalidCreateTimeCompressedV1 PASSED

kafka.log.LogValidatorTest > testInvalidCreateTimeCompressedV2 STARTED

kafka.log.LogValidatorTest > testInvalidCreateTimeCompressedV2 PASSED

kafka.log.LogValidatorTest > testRecompressionV1 STARTED

kafka.log.LogValidatorTest > testRecompressionV1 PASSED

kafka.log.LogValidatorTest > testRecompressionV2 STARTED

kafka.log.LogValidatorTest > testRecompressionV2 PASSED

kafka.log.ProducerStateManagerTest > 
testProducerSequenceWithWrapAroundBatchRecord STARTED

kafka.log.ProducerStateManagerTest > 
testProducerSequenceWithWrapAroundBatchRecord PASSED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing STARTED

kafka.log.ProducerStateManagerTest > testCoordinatorFencing PASSED

kafka.log.ProducerStateManagerTest > testTruncate STARTED

kafka.log.ProducerStateManagerTest > testTruncate PASSED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromTruncatedSnapshotFile PASSED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload STARTED

kafka.log.ProducerStateManagerTest > testRemoveExpiredPidsOnReload PASSED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump STARTED

kafka.log.ProducerStateManagerTest > 
testOutOfSequenceAfterControlRecordEpochBump PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 
STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterTruncation 
PASSED

kafka.log.ProducerStateManagerTest > testTakeSnapshot STARTED

kafka.log.ProducerStateManagerTest > testTakeSnapshot PASSED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore STARTED

kafka.log.ProducerStateManagerTest > testDeleteSnapshotsBefore PASSED

kafka.log.ProducerStateManagerTest > 
testNonMatchingTxnFirstOffsetMetadataNotCached STARTED

kafka.log.ProducerStateManagerTest > 
testNonMatchingTxnFirstOffsetMetadataNotCached PASSED

kafka.log.ProducerStateManagerTest > testAppendEmptyControlBatch STARTED

kafka.log.ProducerStateManagerTest > testAppendEmptyControlBatch PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterEviction 
STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffsetAfterEviction PASSED

kafka.log.ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog 
STARTED

kafka.log.ProducerStateManagerTest > testNoValidationOnFirstEntryWhenLoadingLog 
PASSED

kafka.log.ProducerStateManagerTest > testLoadFromEmptySnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromEmptySnapshotFile PASSED

kafka.log.ProducerStateManagerTest > 
testProducersWithOngoingTransactionsDontExpire STARTED

kafka.log.ProducerStateManagerTest > 
testProducersWithOngoingTransactionsDontExpire PASSED

kafka.log.ProducerStateManagerTest > 

[Vote] KIP-571: Add option to force remove members in StreamsResetter

2020-02-26 Thread feyman2009

Hi, all
I would like to start a vote on KIP-571: Add option to force remove members 
in StreamsResetter .

Thanks!
Feyman

Re: [ANNOUNCE] New committer: Konstantine Karantasis

2020-02-26 Thread Jeremy Custenborder
Congrats Konstantine!

On Wed, Feb 26, 2020 at 2:39 PM Bill Bejeck  wrote:
>
> Congratulations Konstantine! Well deserved.
>
> -Bill
>
> On Wed, Feb 26, 2020 at 5:37 PM Jason Gustafson  wrote:
>
> > The PMC for Apache Kafka has invited Konstantine Karantasis as a committer
> > and we
> > are pleased to announce that he has accepted!
> >
> > Konstantine has contributed 56 patches and helped to review even more. His
> > recent work includes a major overhaul of the Connect task management system
> > in order to support incremental rebalancing. In addition to code
> > contributions, Konstantine helps the community in many other ways including
> > talks at meetups and at Kafka Summit and answering questions on
> > stackoverflow. He consistently shows good judgement in design and a careful
> > attention to details when it comes to code.
> >
> > Thanks for all the contributions and looking forward to more!
> >
> > Jason, on behalf of the Apache Kafka PMC
> >


Build failed in Jenkins: kafka-trunk-jdk8 #4275

2020-02-26 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9601: Stop logging raw connector config values (#8165)

[github] KAFKA-9614: Not initialize topology twice in StreamTask (#8173)

[github] KAFKA-9610: do not throw illegal state when remaining partitions are 
not


--
[...truncated 2.88 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task 

Build failed in Jenkins: kafka-trunk-jdk11 #1195

2020-02-26 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9601: Stop logging raw connector config values (#8165)

[github] KAFKA-9614: Not initialize topology twice in StreamTask (#8173)

[github] KAFKA-9610: do not throw illegal state when remaining partitions are 
not


--
[...truncated 2.89 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED


Re: [VOTE] KIP-409: Allow creating under-replicated topics and partitions

2020-02-26 Thread Ryanne Dolan
Hey all, please consider voting for this KIP.  It's really a shame that
topic creation is impossible when clusters are under-provisioned, which is
not uncommon in a dynamic environment like Kubernetes.

Ryanne

On Thu, Feb 6, 2020 at 10:57 AM Mickael Maison 
wrote:

> I have not seen any new feedback nor votes.
> Bumping this thread again
>
> On Mon, Jan 27, 2020 at 3:55 PM Mickael Maison 
> wrote:
> >
> > Hi,
> >
> > We are now at 4 non-binding votes but still no binding votes.
> > I have not seen any outstanding questions in the DISCUSS thread. If
> > you have any feedback, please let me know.
> >
> > Thanks
> >
> >
> > On Thu, Jan 16, 2020 at 2:03 PM M. Manna  wrote:
> > >
> > > MIckael,
> > >
> > >
> > >
> > > On Thu, 16 Jan 2020 at 14:01, Mickael Maison  >
> > > wrote:
> > >
> > > > Hi Manna,
> > > >
> > > > In your example, the topic 'dummy' is not under replicated. It just
> > > > has 1 replica. An under-replicated topic is a topic with fewer ISRs than
> > > > replicas.
> > > >
> > > > Having under-replicated topics is relatively common in a Kafka
> > > > cluster; it happens every time a broker is down. However, Kafka does
> > > > not permit it at topic creation. Currently at creation,
> > > > Kafka requires at least as many brokers as the replication
> > > > factor. This KIP addresses this limitation.
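To make that definition concrete, here is a minimal editorial sketch of the check; the replica/ISR lists are illustrative broker-id lists, not the actual AdminClient metadata types:

```java
import java.util.List;

public class UnderReplicatedCheck {
    // A partition is under-replicated when fewer in-sync replicas (ISRs)
    // exist than assigned replicas.
    static boolean isUnderReplicated(List<Integer> replicas, List<Integer> isr) {
        return isr.size() < replicas.size();
    }

    public static void main(String[] args) {
        // 'dummy' from the example: one replica, and that replica is in sync,
        // so it is fully replicated (just with replication factor 1).
        System.out.println(isUnderReplicated(List.of(3), List.of(3)));          // false
        // Replication factor 3 but only two replicas in sync: under-replicated.
        System.out.println(isUnderReplicated(List.of(1, 2, 3), List.of(1, 2))); // true
    }
}
```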
> > > >
> > > > Regarding your 2nd point. When rack awareness is enabled, Kafka tries
> > > > to distribute partitions across racks. When all brokers in a rack are
> > > > down (ie: a zone is offline), you can end up with partitions not well
> > > > distributed even with rack awareness. There is currently no easy way
> > > > to track such partitions, so I decided not to address this
> > > > issue in this KIP.
> > > >
> > > > I hope that answers your questions.
> > > >
> > >
> > >  It does and I appreciate you taking time and explaining this.
> > >
> > >  +1 (binding) if I haven't already.
> > >
> > > >
> > > >
> > > >
> > > > On Wed, Jan 15, 2020 at 4:10 PM Kamal Chandraprakash
> > > >  wrote:
> > > > >
> > > > > +1 (non-binding). Thanks for the KIP!
> > > > >
> > > > > On Mon, Jan 13, 2020 at 1:58 PM M. Manna 
> wrote:
> > > > >
> > > > > > Hi Mikael,
> > > > > >
> > > > > > Apologies for last minute question, as I just caught up with it.
> > > > Thanks for
> > > > > > your work on the KIP.
> > > > > >
> > > > > > Just trying to get your thoughts on one thing (I might have
> > > > misunderstood
> > > > > > it) - currently it's possible (even though I am strongly against
> it) to
> > > > > > create Kafka topics which are under-replicated; despite all
> brokers
> > > > being
> > > > > > online. This the the output of an intentionally under-replicated
> topic
> > > > > > "dummy" with p=6 and RF=1 (with a 3 node cluster)
> > > > > >
> > > > > >
> > > > > > virtualadmin@kafka-broker-machine-1:/opt/kafka/bin$
> ./kafka-topics.sh
> > > > > > --create --topic dummy --partitions 6 --replication-factor 1
> > > > > > --bootstrap-server localhost:9092
> > > > > > virtualadmin@kafka-broker-machine-1:/opt/kafka/bin$
> ./kafka-topics.sh
> > > > > > --describe --topic dummy  --bootstrap-server localhost:9092
> > > > > > Topic:dummy PartitionCount:6ReplicationFactor:1
> > > > > >
> > > > > >
> > > >
> Configs:compression.type=gzip,min.insync.replicas=2,cleanup.policy=delete,segment.bytes=10485760,max.message.bytes=10642642,retention.bytes=20971520
> > > > > > Topic: dummyPartition: 0Leader: 3
>  Replicas: 3
> > > > > > Isr: 3
> > > > > > Topic: dummyPartition: 1Leader: 1
>  Replicas: 1
> > > > > > Isr: 1
> > > > > > Topic: dummyPartition: 2Leader: 2
>  Replicas: 2
> > > > > > Isr: 2
> > > > > > Topic: dummyPartition: 3Leader: 3
>  Replicas: 3
> > > > > > Isr: 3
> > > > > > Topic: dummyPartition: 4Leader: 1
>  Replicas: 1
> > > > > > Isr: 1
> > > > > > Topic: dummyPartition: 5Leader: 2
>  Replicas: 2
> > > > > > Isr: 2
> > > > > >
> > > > > >  This is with respect to the following statement on your KIP
> (i.e.
> > > > > > under-replicated topic creation is also permitted when none is
> > > > offline):
> > > > > >
> > > > > > *but note that this may already happen (without this KIP) when
> > > > > > > topics/partitions are created while all brokers in a rack are
> offline
> > > > > > (ie:
> > > > > > > an availability zone is offline). Tracking topics/partitions
> not
> > > > > > optimally
> > > > > > > spread across all racks can be tackled in a follow up KIP.  *
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > Did you mean to say that such under-replicated topics (including
> > > > > > human-created ones) will be handled in a separete KIP ?
> > > > > >
> > > > > > Regards,
> > > > > >
> > > > > >
> > > > > > On Mon, 13 Jan 2020 at 10:15, Mickael Maison <
> mickael.mai...@gmail.com
> > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Hi all.
> > > > > > >
> > > > > > > 

[jira] [Resolved] (KAFKA-9601) Workers log raw connector configs, including values

2020-02-26 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9601.
--
  Reviewer: Randall Hauch
Resolution: Fixed

Thanks for the fix, [~ChrisEgerton]!

Merged to trunk and cherry-picked to the 2.5, 2.4, 2.3, 2.2, 2.1, 2.0, 1.1, and 
1.0 branches; I didn't go back farther since it's unlikely we will issue 
additional patches for earlier branches.

> Workers log raw connector configs, including values
> ---
>
> Key: KAFKA-9601
> URL: https://issues.apache.org/jira/browse/KAFKA-9601
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Critical
> Fix For: 1.0.3, 1.1.2, 2.0.2, 2.1.2, 2.2.3, 2.5.0, 2.3.2, 2.4.1
>
>
> [This line right 
> here|https://github.com/apache/kafka/blob/5359b2e3bc1cf13a301f32490a6630802afc4974/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java#L78]
>  logs all configs (key and value) for a connector, which is bad, since it can 
> lead to secrets (db credentials, cloud storage credentials, etc.) being 
> logged in plaintext.
> We can remove this line. Or change it to just log config keys. Or try to do 
> some super-fancy parsing that masks sensitive values. Well, hopefully not 
> that. That sounds like a lot of work.
> Affects all versions of Connect back through 0.10.1.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [KAFKA-557] Add emit on change support for Kafka Streams

2020-02-26 Thread John Roesler
Hi Richard,

I've been making a final pass over the KIP.

Re: Proposed Behavior Change:

I think this point is controversial and probably doesn't need to be there at 
all:
> 2.b. In certain situations where there is a high volume of idempotent
> updates throughout the Streams DAG, it will be recommended practice
> to materialize all operations to reduce traffic overall across the entire
>  network of nodes.

Re-reading all the points, it seems like we can sum them up in a way that's
a little more straight to the point, and gives us the right amount of 
flexibility:

> Proposed Behavior Changes
> 
> Definition: "idempotent update" is one in which the new result and prior
> result,  when serialized, are identical byte arrays
> 
> Note: an "update" is a concept that only applies to Table operations, so
> the concept of an "idempotent update" also only applies to Table operations.
> See 
> https://kafka.apache.org/documentation/streams/developer-guide/dsl-api.html#streams_concepts_ktable
> for more information.
> 
> Given that definition, we propose for Streams to drop idempotent updates
> in any situation where it's possible and convenient to do so. For example,
> any time we already have both the prior and new results serialized, we 
> may compare them, and drop the update if it is idempotent.
> 
> Note that under this proposal, we can implement idempotence checking
> in the following situations:
> 1. Any aggregation (for example, KGroupedStream, KGroupedTable,
> TimeWindowedKStream, and SessionWindowedKStream operations)
> 2. Any Materialized KTable operation
> 3. Repartition operations, when we need to send both prior and new results

Notice that in my proposed wording, we neither limit ourselves to just the
situations enumerated, nor promise to implement the optimization in every
possible situation. IMHO, this is the best way to propose such a feature.
That way, we have the flexibility to implement it in stages, and also to add
on to the implementation in the future.
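
For concreteness, the byte-array comparison in the definition above can be sketched as follows. The class and method names are hypothetical, purely for illustration; this is not Streams' actual implementation:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class EmitOnChangeSketch {
    // Per the definition above: an update is "idempotent" iff the
    // serialized prior and new results are identical byte arrays.
    static boolean isIdempotentUpdate(byte[] priorSerialized, byte[] newSerialized) {
        return priorSerialized != null && Arrays.equals(priorSerialized, newSerialized);
    }

    public static void main(String[] args) {
        byte[] prior = "count=5".getBytes(StandardCharsets.UTF_8);
        byte[] same  = "count=5".getBytes(StandardCharsets.UTF_8);
        byte[] diff  = "count=6".getBytes(StandardCharsets.UTF_8);
        // An idempotent update is dropped; a changed result is forwarded.
        System.out.println(isIdempotentUpdate(prior, same)); // true  -> drop
        System.out.println(isIdempotentUpdate(prior, diff)); // false -> forward
    }
}
```

Note this comparison is only possible at points where both the prior and new results are already serialized, which is exactly why the proposal limits itself to "possible and convenient" situations.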


Re: Metrics

I agree with Bruno, although, I think it might just be a confusing statement.
It might be clearer to drop all the "discussion", and just say: "We will add a
metric to count the number of idempotent updates that we have dropped".

Also, with respect to the metric, I'm wondering if the metric should be task-
level or processor-node-level. Since the interesting action takes place inside
individual processor nodes, I _think_ it would be higher leverage to just
measure it at the node level. WDYT?

Re: Design Reasoning

This section seems to be a little bit outdated. I also just noticed a "surprise"
configuration "timestamp.aggregation.selection.policy" hidden in point 1.a.
Is that part of the proposal? We haven't discussed it, and I think we were
talking about this KIP being "configuration free".

There is also some discussion of a discarded alternative in the Design Reasoning
section, which is confusing. Finally, there was a point there I didn't 
understand
at all, about stateless operators not being intended to load prior results.
This statement doesn't seem to be true, but it also doesn't seem to be relevant,
so maybe we can just drop it.

Overall, it might help if you make a pass on this section, and just discuss as
briefly as possible the justification for the proposed behavior change, and
not adding a configuration. Try to avoid talking about things that we are not
proposing, since that will just lead to confusion.

Similarly, I'd just completely remove the "Implementation [discarded]" section.
It was good to have this as part of the discussion initially, but as we move
toward a vote, it's better to just streamline the KIP document as much as
possible. Keeping a "discarded" section in the document will just make it
harder for new people to understand the proposal. We did the same thing
with KIP-441, where there were two prior drafts included at the end of the
document, and we just deleted them for clarity.

I liked the "Compatibility" and "Rejected Alternatives" section. Very clear
and to the point.

Thanks again for the contribution! I think once the KIP document is cleaned
up, we'll be in good shape to finalize the discussion.
-John


On Wed, Feb 26, 2020, at 07:27, Bruno Cadonna wrote:
> Hi Richard,
> 
> 1. Could you change "idempotent update operations will only be dropped
> from KTables, not from other classes." -> idempotent update operations
> will only be dropped from materialized KTables? For non-materialized
> KTables -- as they can occur after optimization of the topology -- we
> cannot drop idempotent updates.
> 
> 2. I cannot completely follow the metrics section. Do you want to
> record all idempotent updates or only the dropped ones? In particular,
> I do not understand the following sentences:
> "For that matter, even if we don't drop idempotent updates, we should
> at the very least record the number of idempotent updates that has
> been seen go through a particular processor."
> "Therefore, we should add 

Build failed in Jenkins: kafka-2.2-jdk8-old #214

2020-02-26 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-9601: Stop logging raw connector config values (#8165)


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H27 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision 5b01f63a42b30bd41f91a1ab1334161cb8d3ff28 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5b01f63a42b30bd41f91a1ab1334161cb8d3ff28
Commit message: "KAFKA-9601: Stop logging raw connector config values (#8165)"
 > git rev-list --no-walk 7bab36dc97c30043fbbd09174e056ec7ed8e4f43 # timeout=10
ERROR: No tool found matching GRADLE_4_8_1_HOME
[kafka-2.2-jdk8-old] $ /bin/bash -xe /tmp/jenkins1406355423041979990.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins1406355423041979990.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
ERROR: No tool found matching GRADLE_4_8_1_HOME
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
No credentials specified
ERROR: No tool found matching GRADLE_4_8_1_HOME
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=5b01f63a42b30bd41f91a1ab1334161cb8d3ff28, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
ERROR: No tool found matching GRADLE_4_8_1_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user b...@confluent.io
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user j...@confluent.io


Re: [ANNOUNCE] New committer: Konstantine Karantasis

2020-02-26 Thread Bill Bejeck
Congratulations Konstantine! Well deserved.

-Bill

On Wed, Feb 26, 2020 at 5:37 PM Jason Gustafson  wrote:

> The PMC for Apache Kafka has invited Konstantine Karantasis as a committer
> and we
> are pleased to announce that he has accepted!
>
> Konstantine has contributed 56 patches and helped to review even more. His
> recent work includes a major overhaul of the Connect task management system
> in order to support incremental rebalancing. In addition to code
> contributions, Konstantine helps the community in many other ways including
> talks at meetups and at Kafka Summit and answering questions on
> stackoverflow. He consistently shows good judgement in design and a careful
> attention to details when it comes to code.
>
> Thanks for all the contributions and looking forward to more!
>
> Jason, on behalf of the Apache Kafka PMC
>


[ANNOUNCE] New committer: Konstantine Karantasis

2020-02-26 Thread Jason Gustafson
The PMC for Apache Kafka has invited Konstantine Karantasis as a committer
and we
are pleased to announce that he has accepted!

Konstantine has contributed 56 patches and helped to review even more. His
recent work includes a major overhaul of the Connect task management system
in order to support incremental rebalancing. In addition to code
contributions, Konstantine helps the community in many other ways including
talks at meetups and at Kafka Summit and answering questions on
stackoverflow. He consistently shows good judgement in design and a careful
attention to details when it comes to code.

Thanks for all the contributions and looking forward to more!

Jason, on behalf of the Apache Kafka PMC


Re: [DISCUSS] Apache Kafka 2.5.0 release

2020-02-26 Thread Randall Hauch
Thanks, David. The PR has been merged to trunk and 2.5, and I'm backporting
to earlier branches. I'll resolve
https://issues.apache.org/jira/browse/KAFKA-9601 when I finish backporting.

On Wed, Feb 26, 2020 at 1:28 PM David Arthur  wrote:

> Thanks, Randall. Leaking sensitive config to the logs seems fairly
> severe. I think we should include this. Let's proceed with cherry-picking to
> 2.5.
>
> -David
>
> On Wed, Feb 26, 2020 at 2:25 PM Randall Hauch  wrote:
>
> > Hi, David.
> >
> > If we're still not quite ready for an RC, I'd like to squeeze in
> > https://issues.apache.org/jira/browse/KAFKA-9601, which removes the raw
> > connector config properties in a DEBUG level log message. PR is ready
> (test
> > failures are unrelated), the risk is very low, and I think it'd be great
> to
> > correct this sooner than later.
> >
> > Randall
> >
> > On Wed, Feb 26, 2020 at 11:26 AM David Arthur  wrote:
> >
> > > Viktor, the change LGTM. I've approved and merged the cherry-pick
> version
> > > into 2.5.
> > >
> > > Thanks!
> > > David
> > >
> > > On Tue, Feb 25, 2020 at 4:43 AM Viktor Somogyi-Vass <
> > > viktorsomo...@gmail.com>
> > > wrote:
> > >
> > > > Hi David,
> > > >
> > > > There are two short JIRAs related to KIP-352 that documents the newly
> > > added
> > > > metrics. Is it possible to merge them in?
> > > > https://github.com/apache/kafka/pull/7434 (trunk)
> > > > https://github.com/apache/kafka/pull/8127 (2.5 cherry-pick)
> > > >
> > > > Thanks,
> > > > Viktor
> > > >
> > > >
> > > > On Mon, Feb 24, 2020 at 7:22 PM David Arthur 
> wrote:
> > > >
> > > > > Thanks, Tu. I've moved KIP-467 out of the release plan.
> > > > >
> > > > > -David
> > > > >
> > > > > On Thu, Feb 20, 2020 at 6:00 PM Tu Tran  wrote:
> > > > >
> > > > > > Hi David,
> > > > > >
> > > > > > Thanks for being the release main driver. Since the
> implementation
> > > for
> > > > > the
> > > > > > last part of KIP-467 wasn't finalized prior to Feb 12th, could
> you
> > > > remove
> > > > > > KIP-467 from the list?
> > > > > >
> > > > > > Thanks,
> > > > > > Tu
> > > > > >
> > > > > > On Thu, Feb 20, 2020 at 7:18 AM David Arthur 
> > > wrote:
> > > > > >
> > > > > > > Randall / Konstantine,
> > > > > > >
> > > > > > > Sorry for the late reply. Thanks for the fix and for the
> update!
> > I
> > > > see
> > > > > > this
> > > > > > > change on the 2.5 branch (@b403c66). Consider this a
> retroactive
> > > > > approval
> > > > > > > for this bugfix :)
> > > > > > >
> > > > > > > -David
> > > > > > >
> > > > > > > On Fri, Feb 14, 2020 at 2:21 PM Konstantine Karantasis <
> > > > > > > konstant...@confluent.io> wrote:
> > > > > > >
> > > > > > > > Hi David,
> > > > > > > >
> > > > > > > > I want to confirm what Randall mentions above. The code fixes
> > for
> > > > > > > > KAFKA-9556 were in place before code freeze on Wed, but we
> > spent
> > > a
> > > > > bit
> > > > > > > more
> > > > > > > > time hardening the conditions of the integration tests and
> > fixing
> > > > > some
> > > > > > > > jenkins branch builders to run the test on repeat.
> > > > > > > >
> > > > > > > > Best,
> > > > > > > > Konstantine
> > > > > > > >
> > > > > > > >
> > > > > > > > On Fri, Feb 14, 2020 at 7:42 AM Randall Hauch <
> > rha...@gmail.com>
> > > > > > wrote:
> > > > > > > >
> > > > > > > > > Hi, David.
> > > > > > > > >
> > > > > > > > > I just filed
> > https://issues.apache.org/jira/browse/KAFKA-9556
> > > > that
> > > > > > > > > identifies two pretty minor issues with the new KIP-558
> that
> > > adds
> > > > > new
> > > > > > > > > Connect REST API endpoints to get the list of topics used
> by
> > a
> > > > > > > connector.
> > > > > > > > > The impact is high: the feature cannot be fully disabled,
> and
> > > > > Connect
> > > > > > > > does
> > > > > > > > > not automatically reset the topic set when a connector is
> > > > deleted.
> > > > > > > > > https://github.com/apache/kafka/pull/8085 includes the two
> > > > fixes,
> > > > > > and
> > > > > > > > also
> > > > > > > > > adds more unit and integration tests for this feature.
> > > Although I
> > > > > > just
> > > > > > > > created the blocker this AM, Konstantine has actually been
> > > working
> > > > on
> > > > > > the
> > > > > > > > fix
> > > > > > > > > for four days. Risk of merging this PR is low, since a) the
> > new
> > > > > > > > integration
> > > > > > > > > tests add significant coverage and we've run the new tests
> > > > numerous
> > > > > > > > times,
> > > > > > > > > and b) the fixes help gate the new feature even more and
> > allow
> > > > the
> > > > > > > > feature
> > > > > > > > > to be completely disabled.
> > > > > > > > >
> > > > > > > > > I'd like approval to merge
> > > > > https://github.com/apache/kafka/pull/8085
> > > > > > > > >
> > > > > > > > > Thanks!
> > > > > > > > > Randall
> > > > > > > > >
> > > > > > > > > On Mon, Feb 10, 2020 at 11:31 AM David Arthur <
> > > mum...@gmail.com>
> > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Just a friendly 

Re: [DISCUSS] KIP-569: DescribeConfigsResponse - Update the schema to include datatype of the field

2020-02-26 Thread Colin McCabe
Hi Shailesh,

I think most users of the DescribeConfigs API do not want to get help text or 
configuration schema information.  So, it would be inefficient to always 
include this information as part of the DescribeConfigs response.  It would be 
better to create a new, separate API for getting this configuration schema 
information, if that's what we really want.  This API should probably also 
allow configuration management systems to list all the possible configurations 
for topics, brokers, etc., which is something that I think many of them would 
want.

We also need to consider compatibility.  One example is, what if we later add a 
new type of configuration key, such as UUID.  What would the hypothetical 
DescribeConfigurationSchema API return in this case, for older clients?  We 
probably need an UNKNOWN enum value to be used to indicate that the server 
knows about configuration key types that the client does not.
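
The compatibility pattern described above — mapping unrecognized type codes to an UNKNOWN sentinel — can be sketched client-side as follows. The enum constants and wire ids here are hypothetical, not an actual Kafka protocol definition:

```java
public class ConfigTypeSketch {
    // Hypothetical client-side view of a config-type code on the wire.
    // Any id this client does not recognize maps to UNKNOWN, so an older
    // client tolerates newer server-side types (e.g. a future UUID type).
    enum ConfigType {
        BOOLEAN, STRING, INT, LONG, DOUBLE, LIST, CLASS, PASSWORD, UNKNOWN;

        static ConfigType fromId(int id) {
            ConfigType[] known = values();
            // The last constant is the UNKNOWN sentinel itself.
            return (id >= 0 && id < known.length - 1) ? known[id] : UNKNOWN;
        }
    }

    public static void main(String[] args) {
        System.out.println(ConfigType.fromId(1));  // STRING
        System.out.println(ConfigType.fromId(42)); // UNKNOWN
    }
}
```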

best,
Colin


On Wed, Feb 19, 2020, at 09:13, Shailesh Panwar wrote:
> Bump.
> 
> Thanks
> Shailesh
> 
> On Tue, Feb 11, 2020 at 1:00 PM Shailesh Panwar 
> wrote:
> 
> > Hi all,
> > We would like to extend the DescribeConfigsResponse schema to include the
> > data-type of the fields.
> >
> > The KIP can be found here:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-569%3A+DescribeConfigsResponse+-+Update+the+schema+to+include+datatype+of+the+field
> >
> > Thanks
> > Shailesh
> >
>


[jira] [Resolved] (KAFKA-9610) Should not throw illegal state exception during task revocation

2020-02-26 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-9610.
--
Fix Version/s: 2.6.0
   Resolution: Fixed

> Should not throw illegal state exception during task revocation
> ---
>
> Key: KAFKA-9610
> URL: https://issues.apache.org/jira/browse/KAFKA-9610
> Project: Kafka
>  Issue Type: Bug
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
> Fix For: 2.6.0
>
>
> In handleRevocation call, the remaining partitions could cause an illegal 
> state exception on task revocation. This should also be fixed as it is 
> expected when the tasks are cleared from the assignor onAssignment callback.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9614) Avoid initializing the topology twice when resuming stream tasks from suspended state

2020-02-26 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-9614.
--
Fix Version/s: 2.6.0
   Resolution: Fixed

> Avoid initializing the topology twice when resuming stream tasks from 
> suspended state
> -
>
> Key: KAFKA-9614
> URL: https://issues.apache.org/jira/browse/KAFKA-9614
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Major
> Fix For: 2.6.0
>
>
> When resuming from a suspended stream task, today it first transits to 
> restoring and then to running. The main motivation is to simplify the state 
> transition FSM of stream tasks. We should only call `initializeTopology` when 
> transiting from restoring to running, but the tech debt cleanup missed one 
> gap that beforehand we transit to running directly and hence also call 
> `initializeTopology` at the first transition.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
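
A minimal sketch of the fix described in KAFKA-9614 — calling initializeTopology only on the restoring-to-running transition — assuming a simplified three-state task FSM (all names here are hypothetical, for illustration only):

```java
public class TaskFsmSketch {
    enum State { SUSPENDED, RESTORING, RUNNING }

    State state = State.SUSPENDED;
    int topologyInits = 0;

    void transitionTo(State next) {
        // Per the fix: initialize the topology only on RESTORING -> RUNNING.
        // Resuming via SUSPENDED -> RESTORING -> RUNNING then initializes
        // exactly once, instead of twice as in the reported bug.
        if (state == State.RESTORING && next == State.RUNNING) {
            topologyInits++;
        }
        state = next;
    }

    public static void main(String[] args) {
        TaskFsmSketch task = new TaskFsmSketch();
        task.transitionTo(State.RESTORING);
        task.transitionTo(State.RUNNING);
        System.out.println(task.topologyInits); // 1
    }
}
```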


Re: [VOTE] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-26 Thread Sophie Blee-Goldman
Hi all,

Apologies for the back and forth on this issue, but while implementing the
new
API I've come around to thinking we should not return anything after all,
and just
leave it up to the user to determine if they need to retry the rebalance.

Since I was the main (only) strong proponent of this aspect, I'm assuming
this will be fine with everyone -- but please do raise any
concerns/questions if you
have them.

Thanks!
Sophie

On Fri, Feb 14, 2020 at 11:26 AM Sophie Blee-Goldman 
wrote:

> Thanks all!
>
> This KIP passes with 3 binding votes (John, Bill, and Guozhang) and
> 3 non-binding votes (Navinder, Konstantine, Boyang, and Bruno).
>
> I'll call for review on a PR shortly.
>
> Sophie
>
>
>
> On Fri, Feb 14, 2020 at 12:25 AM Bruno Cadonna  wrote:
>
>> Thanks!
>>
>> +1 (non-binding)
>>
>> Best,
>> Bruno
>>
>> On Fri, Feb 14, 2020 at 1:57 AM Boyang Chen 
>> wrote:
>> >
>> > +1 (non-binding)
>> >
>> > On Thu, Feb 13, 2020 at 4:45 PM Guozhang Wang 
>> wrote:
>> >
>> > > +1 (binding).
>> > >
>> > >
>> > > Guozhang
>> > >
>> > > On Tue, Feb 11, 2020 at 5:29 PM Guozhang Wang 
>> wrote:
>> > >
>> > > > Hi Sophie,
>> > > >
>> > > > Thanks for the KIP, I left some comments on the DISCUSS thread.
>> > > >
>> > > >
>> > > > Guozhang
>> > > >
>> > > >
>> > > > On Tue, Feb 11, 2020 at 3:25 PM Bill Bejeck 
>> wrote:
>> > > >
>> > > >> Thanks for the KIP Sophie.
>> > > >>
>> > > >> It's a +1 (binding) for me.
>> > > >>
>> > > >> -Bill
>> > > >>
>> > > >> On Tue, Feb 11, 2020 at 4:21 PM Konstantine Karantasis <
>> > > >> konstant...@confluent.io> wrote:
>> > > >>
>> > > >> > The KIP reads quite well for me now and I think this feature will
>> > > enable
>> > > >> > even more efficient load balancing for specific use cases.
>> > > >> >
>> > > >> > I'm also +1 (non-binding)
>> > > >> >
>> > > >> > - Konstantine
>> > > >> >
>> > > >> > On Tue, Feb 11, 2020 at 9:35 AM Navinder Brar
>> > > >> >  wrote:
>> > > >> >
>> > > >> > > Thanks Sophie, much required.
>> > > >> > > +1 non-binding.
>> > > >> > >
>> > > >> > >
>> > > >> > > Sent from Yahoo Mail for iPhone
>> > > >> > >
>> > > >> > >
>> > > >> > > On Tuesday, February 11, 2020, 10:33 PM, John Roesler <
>> > > >> > vvcep...@apache.org>
>> > > >> > > wrote:
>> > > >> > >
>> > > >> > > Thanks Sophie,
>> > > >> > >
>> > > >> > > I'm +1 (binding)
>> > > >> > >
>> > > >> > > -John
>> > > >> > >
>> > > >> > > On Mon, Feb 10, 2020, at 20:54, Sophie Blee-Goldman wrote:
>> > > >> > > > Hey all,
>> > > >> > > >
>> > > >> > > > I'd like to start the voting on KIP-568. It proposes the new
>> > > >> > > > Consumer#enforceRebalance API to facilitate triggering
>> efficient
>> > > >> > > rebalances.
>> > > >> > > >
>> > > >> > > > For reference, here is the KIP link again:
>> > > >> > > >
>> > > >> > >
>> > > >> >
>> > > >>
>> > >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-568%3A+Explicit+rebalance+triggering+on+the+Consumer
>> > > >> > > >
>> > > >> > > > Thanks!
>> > > >> > > > Sophie
>> > > >> > > >
>> > > >> > >
>> > > >> > >
>> > > >> > >
>> > > >> > >
>> > > >> >
>> > > >>
>> > > >
>> > > >
>> > > > --
>> > > > -- Guozhang
>> > > >
>> > >
>> > >
>> > > --
>> > > -- Guozhang
>> > >
>>
>


[jira] [Created] (KAFKA-9615) Refactor TaskManager to extract task creation / cleanup out of StreamThread

2020-02-26 Thread Guozhang Wang (Jira)
Guozhang Wang created KAFKA-9615:


 Summary: Refactor TaskManager to extract task creation / cleanup 
out of StreamThread
 Key: KAFKA-9615
 URL: https://issues.apache.org/jira/browse/KAFKA-9615
 Project: Kafka
  Issue Type: Improvement
Reporter: Guozhang Wang
Assignee: John Roesler


We have a TODO marker for moving the task-creators into the task-manager as a 
follow-up of the tech cleanup, and here are some rationales:

1. right now the reason we keep the task-creators in stream-thread is to be 
able to mock task creation in task-manager tests, but that should be better 
achieved with easy mocks on interfaces than this.

2. the thread only needs the thread-producer for a) metrics exposure and b) 
thread-metadata's producer-client ids. Both of them can be exposed from the 
task-manager instead of thread-producer.

So the idea is that we let the task-manager abstract / mock the 
task-creation and manage the producer(s) internally as well, and expose to 
stream-thread only the metrics / client-ids information upon request. With that 
it makes more sense to let the creator be the closer of the producer(s) --- 
i.e. the task-manager, not the stream-producer.

And we can fix a couple "FIXMEs" as well along with this effort.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
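
The rationale in point 1 — that task creation is easier to mock behind an interface than behind a concrete StreamThread — can be sketched as follows (interface and class names are hypothetical, not the actual Streams internals):

```java
public class TaskCreatorSketch {
    // Hypothetical interface: hiding task creation behind an interface
    // lets TaskManager tests substitute a trivial stub creator instead
    // of wiring up a real StreamThread.
    interface TaskCreator {
        String createTask(int partition);
    }

    static class TaskManager {
        private final TaskCreator creator;
        TaskManager(TaskCreator creator) { this.creator = creator; }

        String handleAssignment(int partition) {
            return creator.createTask(partition);
        }
    }

    public static void main(String[] args) {
        // In a test, the creator is a lambda; no StreamThread is needed.
        TaskManager tm = new TaskManager(p -> "stub-task-" + p);
        System.out.println(tm.handleAssignment(3)); // stub-task-3
    }
}
```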


[jira] [Created] (KAFKA-9614) Avoid initializing the topology twice when resuming stream tasks from suspended state

2020-02-26 Thread Guozhang Wang (Jira)
Guozhang Wang created KAFKA-9614:


 Summary: Avoid initializing the topology twice when resuming 
stream tasks from suspended state
 Key: KAFKA-9614
 URL: https://issues.apache.org/jira/browse/KAFKA-9614
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Guozhang Wang
Assignee: Guozhang Wang


When resuming from a suspended stream task, today it first transits to restoring 
and then to running. The main motivation is to simplify the state transition 
FSM of stream tasks. We should only call `initializeTopology` when transiting 
from restoring to running, but the tech debt cleanup missed one gap that 
beforehand we transit to running directly and hence also call 
`initializeTopology` at the first transition.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9613) CorruptRecordException: Found record size 0 smaller than minimum record overhead

2020-02-26 Thread Amit Khandelwal (Jira)
Amit Khandelwal created KAFKA-9613:
--

 Summary: CorruptRecordException: Found record size 0 smaller than 
minimum record overhead
 Key: KAFKA-9613
 URL: https://issues.apache.org/jira/browse/KAFKA-9613
 Project: Kafka
  Issue Type: Bug
Reporter: Amit Khandelwal


20200224;21:01:38: [2020-02-24 21:01:38,615] ERROR [ReplicaManager broker=0] 
Error processing fetch with max size 1048576 from consumer on partition 
SANDBOX.BROKER.NEWORDER-0: (fetchOffset=211886, logStartOffset=-1, 
maxBytes=1048576, currentLeaderEpoch=Optional.empty) 
(kafka.server.ReplicaManager)

20200224;21:01:38: org.apache.kafka.common.errors.CorruptRecordException: Found 
record size 0 smaller than minimum record overhead (14) in file 
/data/tmp/kafka-topic-logs/SANDBOX.BROKER.NEWORDER-0/.log.

20200224;21:05:48: [2020-02-24 21:05:48,711] INFO [GroupMetadataManager 
brokerId=0] Removed 0 expired offsets in 1 milliseconds. 
(kafka.coordinator.group.GroupMetadataManager)

20200224;21:10:22: [2020-02-24 21:10:22,204] INFO [GroupCoordinator 0]: Member 
_011-9e61d2c9-ce5a-4231-bda1-f04e6c260dc0-StreamThread-1-consumer-27768816-ee87-498f-8896-191912282d4f
 in group y_011 has failed, removing it from the group 
(kafka.coordinator.group.GroupCoordinator)

 

[https://stackoverflow.com/questions/60404510/kafka-broker-issue-replica-manager-with-max-size#]

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
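
The broker-side check that produces the error above can be sketched as follows. The constant 14 mirrors the error message in this report (assumed to match the minimum overhead of Kafka's v2 record format); the class itself is a hypothetical illustration, not Kafka's actual validation code:

```java
public class RecordSizeCheck {
    // Each stored record carries a fixed minimum amount of framing
    // (length, attributes, timestamp delta, offset delta, ...), so any
    // record whose recorded size falls below it must be corrupt.
    static final int MIN_RECORD_OVERHEAD = 14;

    static void validate(int recordSize) {
        if (recordSize < MIN_RECORD_OVERHEAD) {
            throw new IllegalStateException(
                "Found record size " + recordSize
                + " smaller than minimum record overhead ("
                + MIN_RECORD_OVERHEAD + ")");
        }
    }

    public static void main(String[] args) {
        validate(30); // a plausible record size: passes
        try {
            validate(0); // reproduces the failure mode reported above
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

A size of 0 typically points at a truncated or zero-filled log segment on disk rather than bad data from a producer.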


Re: [DISCUSS] Apache Kafka 2.5.0 release

2020-02-26 Thread David Arthur
Thanks, Randall. Leaking sensitive config to the logs seems fairly
severe. I think we should include this. Let's proceed with cherry-picking to
2.5.

-David

On Wed, Feb 26, 2020 at 2:25 PM Randall Hauch  wrote:

> Hi, David.
>
> If we're still not quite ready for an RC, I'd like to squeeze in
> https://issues.apache.org/jira/browse/KAFKA-9601, which removes the raw
> connector config properties in a DEBUG level log message. PR is ready (test
> failures are unrelated), the risk is very low, and I think it'd be great to
> correct this sooner than later.
>
> Randall
>
> On Wed, Feb 26, 2020 at 11:26 AM David Arthur  wrote:
>
> > Viktor, the change LGTM. I've approved and merged the cherry-pick version
> > into 2.5.
> >
> > Thanks!
> > David
> >
> > On Tue, Feb 25, 2020 at 4:43 AM Viktor Somogyi-Vass <
> > viktorsomo...@gmail.com>
> > wrote:
> >
> > > Hi David,
> > >
> > > There are two short JIRAs related to KIP-352 that documents the newly
> > added
> > > metrics. Is it possible to merge them in?
> > > https://github.com/apache/kafka/pull/7434 (trunk)
> > > https://github.com/apache/kafka/pull/8127 (2.5 cherry-pick)
> > >
> > > Thanks,
> > > Viktor
> > >
> > >
> > > On Mon, Feb 24, 2020 at 7:22 PM David Arthur  wrote:
> > >
> > > > Thanks, Tu. I've moved KIP-467 out of the release plan.
> > > >
> > > > -David
> > > >
> > > > On Thu, Feb 20, 2020 at 6:00 PM Tu Tran  wrote:
> > > >
> > > > > Hi David,
> > > > >
> > > > > Thanks for being the release main driver. Since the implementation
> > for
> > > > the
> > > > > last part of KIP-467 wasn't finalized prior to Feb 12th, could you
> > > remove
> > > > > KIP-467 from the list?
> > > > >
> > > > > Thanks,
> > > > > Tu
> > > > >
> > > > > On Thu, Feb 20, 2020 at 7:18 AM David Arthur 
> > wrote:
> > > > >
> > > > > > Randall / Konstantine,
> > > > > >
> > > > > > Sorry for the late reply. Thanks for the fix and for the update!
> I
> > > see
> > > > > this
> > > > > > change on the 2.5 branch (@b403c66). Consider this a retroactive
> > > > approval
> > > > > > for this bugfix :)
> > > > > >
> > > > > > -David
> > > > > >
> > > > > > On Fri, Feb 14, 2020 at 2:21 PM Konstantine Karantasis <
> > > > > > konstant...@confluent.io> wrote:
> > > > > >
> > > > > > > Hi David,
> > > > > > >
> > > > > > > I want to confirm what Randall mentions above. The code fixes
> for
> > > > > > > KAFKA-9556 were in place before code freeze on Wed, but we
> spent
> > a
> > > > bit
> > > > > > more
> > > > > > > time hardening the conditions of the integration tests and
> fixing
> > > > some
> > > > > > > jenkins branch builders to run the test on repeat.
> > > > > > >
> > > > > > > Best,
> > > > > > > Konstantine
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Feb 14, 2020 at 7:42 AM Randall Hauch <
> rha...@gmail.com>
> > > > > wrote:
> > > > > > >
> > > > > > > > Hi, David.
> > > > > > > >
> > > > > > > > I just filed
> https://issues.apache.org/jira/browse/KAFKA-9556
> > > that
> > > > > > > > identifies two pretty minor issues with the new KIP-558 that
> > adds
> > > > new
> > > > > > > > Connect REST API endpoints to get the list of topics used by
> a
> > > > > > connector.
> > > > > > > > The impact is high: the feature cannot be fully disabled, and
> > > > Connect
> > > > > > > does
> > > > > > > > not automatically reset the topic set when a connector is
> > > deleted.
> > > > > > > > https://github.com/apache/kafka/pull/8085 includes the two
> > > fixes,
> > > > > and
> > > > > > > also
> > > > > > > > adds more unit and integration tests for this feature.
> > Although I
> > > > > just
> > > > > > > > created the blocker this AM, Konstantine has actually been
> > working
> > > on
> > > > > the
> > > > > > > fix
> > > > > > > > for four days. Risk of merging this PR is low, since a) the
> new
> > > > > > > integration
> > > > > > > > tests add significant coverage and we've run the new tests
> > > numerous
> > > > > > > times,
> > > > > > > > and b) the fixes help gate the new feature even more and
> allow
> > > the
> > > > > > > feature
> > > > > > > > to be completely disabled.
> > > > > > > >
> > > > > > > > I'd like approval to merge
> > > > https://github.com/apache/kafka/pull/8085
> > > > > > > >
> > > > > > > > Thanks!
> > > > > > > > Randall
> > > > > > > >
> > > > > > > > On Mon, Feb 10, 2020 at 11:31 AM David Arthur <
> > mum...@gmail.com>
> > > > > > wrote:
> > > > > > > >
> > > > > > > > > Just a friendly reminder that this Wednesday, February
> 12th,
> > is
> > > > the
> > > > > > > code
> > > > > > > > > freeze for the 2.5.0 release. After this time we will only
> > > accept
> > > > > > > blocker
> > > > > > > > > bugs onto the release branch.
> > > > > > > > >
> > > > > > > > > Thanks!
> > > > > > > > > David
> > > > > > > > >
> > > > > > > > > On Fri, Jan 31, 2020 at 5:13 PM David Arthur <
> > mum...@gmail.com
> > > >
> > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Thanks! I've updated the list.
> > > > > > > > > >
> > > > > > > > > > On Thu, Jan 30, 2020 

Build failed in Jenkins: kafka-trunk-jdk11 #1194

2020-02-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix gradle error writing test stdout (#8133)


--
[...truncated 2.89 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 

Re: [DISCUSS] Apache Kafka 2.5.0 release

2020-02-26 Thread Randall Hauch
Hi, David.

If we're still not quite ready for an RC, I'd like to squeeze in
https://issues.apache.org/jira/browse/KAFKA-9601, which removes the raw
connector config properties in a DEBUG level log message. PR is ready (test
failures are unrelated), the risk is very low, and I think it'd be great to
correct this sooner than later.

Randall

On Wed, Feb 26, 2020 at 11:26 AM David Arthur  wrote:

> Viktor, the change LGTM. I've approved and merged the cherry-pick version
> into 2.5.
>
> Thanks!
> David
>
> On Tue, Feb 25, 2020 at 4:43 AM Viktor Somogyi-Vass <
> viktorsomo...@gmail.com>
> wrote:
>
> > Hi David,
> >
> > There are two short JIRAs related to KIP-352 that documents the newly
> added
> > metrics. Is it possible to merge them in?
> > https://github.com/apache/kafka/pull/7434 (trunk)
> > https://github.com/apache/kafka/pull/8127 (2.5 cherry-pick)
> >
> > Thanks,
> > Viktor
> >
> >
> > On Mon, Feb 24, 2020 at 7:22 PM David Arthur  wrote:
> >
> > > Thanks, Tu. I've moved KIP-467 out of the release plan.
> > >
> > > -David
> > >
> > > On Thu, Feb 20, 2020 at 6:00 PM Tu Tran  wrote:
> > >
> > > > Hi David,
> > > >
> > > > Thanks for being the release main driver. Since the implementation
> for
> > > the
> > > > last part of KIP-467 wasn't finalized prior to Feb 12th, could you
> > remove
> > > > KIP-467 from the list?
> > > >
> > > > Thanks,
> > > > Tu
> > > >
> > > > On Thu, Feb 20, 2020 at 7:18 AM David Arthur 
> wrote:
> > > >
> > > > > Randall / Konstantine,
> > > > >
> > > > > Sorry for the late reply. Thanks for the fix and for the update! I
> > see
> > > > this
> > > > > change on the 2.5 branch (@b403c66). Consider this a retroactive
> > > approval
> > > > > for this bugfix :)
> > > > >
> > > > > -David
> > > > >
> > > > > On Fri, Feb 14, 2020 at 2:21 PM Konstantine Karantasis <
> > > > > konstant...@confluent.io> wrote:
> > > > >
> > > > > > Hi David,
> > > > > >
> > > > > > I want to confirm what Randall mentions above. The code fixes for
> > > > > > KAFKA-9556 were in place before code freeze on Wed, but we spent
> a
> > > bit
> > > > > more
> > > > > > time hardening the conditions of the integration tests and fixing
> > > some
> > > > > > jenkins branch builders to run the test on repeat.
> > > > > >
> > > > > > Best,
> > > > > > Konstantine
> > > > > >
> > > > > >
> > > > > > On Fri, Feb 14, 2020 at 7:42 AM Randall Hauch 
> > > > wrote:
> > > > > >
> > > > > > > Hi, David.
> > > > > > >
> > > > > > > I just filed https://issues.apache.org/jira/browse/KAFKA-9556
> > that
> > > > > > > identifies two pretty minor issues with the new KIP-558 that
> adds
> > > new
> > > > > > > Connect REST API endpoints to get the list of topics used by a
> > > > > connector.
> > > > > > > The impact is high: the feature cannot be fully disabled, and
> > > Connect
> > > > > > does
> > > > > > > not automatically reset the topic set when a connector is
> > deleted.
> > > > > > > https://github.com/apache/kafka/pull/8085 includes the two
> > fixes,
> > > > and
> > > > > > also
> > > > > > > adds more unit and integration tests for this feature.
> Although I
> > > > just
> > > > > > > created the blocker this AM, Konstantine has actually been
> working
> > on
> > > > the
> > > > > > fix
> > > > > > > for four days. Risk of merging this PR is low, since a) the new
> > > > > > integration
> > > > > > > tests add significant coverage and we've run the new tests
> > numerous
> > > > > > times,
> > > > > > > and b) the fixes help gate the new feature even more and allow
> > the
> > > > > > feature
> > > > > > > to be completely disabled.
> > > > > > >
> > > > > > > I'd like approval to merge
> > > https://github.com/apache/kafka/pull/8085
> > > > > > >
> > > > > > > Thanks!
> > > > > > > Randall
> > > > > > >
> > > > > > > On Mon, Feb 10, 2020 at 11:31 AM David Arthur <
> mum...@gmail.com>
> > > > > wrote:
> > > > > > >
> > > > > > > > Just a friendly reminder that this Wednesday, February 12th,
> is
> > > the
> > > > > > code
> > > > > > > > freeze for the 2.5.0 release. After this time we will only
> > accept
> > > > > > blocker
> > > > > > > > bugs onto the release branch.
> > > > > > > >
> > > > > > > > Thanks!
> > > > > > > > David
> > > > > > > >
> > > > > > > > On Fri, Jan 31, 2020 at 5:13 PM David Arthur <
> mum...@gmail.com
> > >
> > > > > wrote:
> > > > > > > >
> > > > > > > > > Thanks! I've updated the list.
> > > > > > > > >
> > > > > > > > > On Thu, Jan 30, 2020 at 5:48 PM Konstantine Karantasis <
> > > > > > > > > konstant...@confluent.io> wrote:
> > > > > > > > >
> > > > > > > > >> Hi David,
> > > > > > > > >>
> > > > > > > > >> thanks for driving the release.
> > > > > > > > >>
> > > > > > > > >> Please also remove KIP-158 from the list of KIPs that you
> > plan
> > > > to
> > > > > > > > include
> > > > > > > > >> in 2.5
> > > > > > > > >> KIP-158 has been accepted, but the implementation is not
> yet
> > > > > final.
> > > > > > It
> > > > > > > > >> will be included in the release that follows 2.5.
> 

Build failed in Jenkins: kafka-trunk-jdk8 #4274

2020-02-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix gradle error writing test stdout (#8133)


--
[...truncated 2.88 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED


Re: Discussion thread for KIP-X

2020-02-26 Thread Boyang Chen
Thanks, Rangan, for the input! We will do a retro for Q1, reflect your use 
cases there, and notify you once we have a next-step plan to help out.

Boyang


From: Rangan Prabhakaran (BLOOMBERG/ 919 3RD A) 
Sent: Tuesday, February 25, 2020 2:57 AM
To: dev@kafka.apache.org 
Cc: bche...@outlook.com 
Subject: Re: Discussion thread for KIP-X

Hi Boyang,
The Kafka clusters we manage are multi-tenant clusters hosting anywhere from 
hundreds to a few thousand different workloads on any given cluster.

For our setup, we have noticed that the breaking limit wrt partition count is 
around 10k partitions per broker. Beyond this point, we start seeing 
significant replication slowness, election slowness, issues around too many 
files opened etc

The type of workloads on our clusters that would benefit from the proposal 
outlined in this KIP are

  *   Bursty workloads such as workloads that flood the topic once an hour and 
need to be processed quickly within a strict time window
  *   Workloads that are using topics as simple queues (stateless and don’t 
care about ordering within a partition)
  *   Stream processing workloads where parallelism is driven by the number of 
input topic partitions

Currently, we are over provisioning partitions to efficiently serve these 
workloads which results in significant under-utilization of the respective 
clusters.

Additionally, we are also seeing quite a few workloads that are relying on the 
partition level ordering guarantees today and are filtering out the keys they 
don’t care about on the client side. These workloads would benefit from the key 
level ordering proposed in KIP-X and result in much simpler application logic 
for clients.
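A toy Python sketch (hypothetical record shapes, not the Kafka consumer API) of the client-side filtering pattern described above, and the fetch waste it implies:

```python
# Toy sketch of client-side key filtering: each worker reads the whole
# partition and drops the keys it doesn't own, wasting the bandwidth
# spent on the discarded records. Key-level ordering as proposed in
# KIP-X would let each worker receive only its own keys.

partition = [("user-1", "a"), ("user-2", "b"), ("user-1", "c"), ("user-3", "d")]

def consume_filtering(records, owned_keys):
    """What clients do today: fetch everything, keep only owned keys."""
    return [(k, v) for k, v in records if k in owned_keys]

worker_a = consume_filtering(partition, {"user-1"})
print(worker_a)  # [('user-1', 'a'), ('user-1', 'c')]
# Per-key ordering is preserved, but 2 of the 4 records were fetched in vain.
```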

Let me know if this helps and if you have any further questions
Rangan

From: dev@kafka.apache.org At: 02/21/20 15:45:49
To: dev@kafka.apache.org
Cc: Rangan Prabhakaran (BLOOMBERG/ 919 3RD A ) 
 , 
bche...@outlook.com
Subject: Re: Discussion thread for KIP-X

Hey Rangan,

thanks for the interest! In fact we are still in the design phase, and need
more supporting use cases that require a higher scaling factor than the
number of partitions. It would be good if you could share some of your use
cases where the unit time of processing one record is the bottleneck, or any
cost-wise concerns about over-partitioning.

Boyang

On Fri, Feb 21, 2020 at 10:44 AM Guozhang Wang <wangg...@gmail.com> wrote:

> cc @Boyang Chen <bche...@outlook.com> who authored this draft.
>
>
> Guozhang
>
> On Fri, Feb 21, 2020 at 10:29 AM Rangan Prabhakaran (BLOOMBERG/ 919 3RD A)
> <
> kprabhaka...@bloomberg.net> wrote:
>
> > Hi,
> > A few of us have been following KIP-X. We are interested in the roadmap /
> > plan there and would like to contribute towards the same.
> >
> > What are the next steps to discuss / iterate on this KIP ? Currently, its
> > in draft state and there does not seem to be a discussion thread attached
> > yet.
> >
> > KIP -
> >
>
https://cwiki.apache.org/confluence/display/KAFKA/KIP-X%3A+Introduce+a+cooperative+consumer+processing+semantic
> >
> > Thanks
> > Rangan
>
>
>
> --
> -- Guozhang
>



[DISCUSS] KIP-574: CLI Dynamic Configuration with file input

2020-02-26 Thread Aneel Nazareth
Hi,

I'd like to discuss adding a new argument to kafka-configs.sh
(ConfigCommand.scala).

Recently I've been working on some things that require complex
configurations. I've chosen to represent them as JSON strings in my
server.properties. This works well, and I'm able to update the
configurations by editing server.properties and restarting the broker. I've
added the ability to dynamically configure them, and that works well using
the AdminClient. However, when I try to update these configurations using
kafka-configs.sh, I run into a problem. My configurations contain commas,
and kafka-configs.sh tries to break them up into key/value pairs at the
comma boundary.
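To illustrate, here is a minimal Python sketch (a toy parser, not the actual ConfigCommand code; the config name is hypothetical) of the failure mode: splitting at every comma mangles a JSON-valued config, while a one-entry-per-line properties file does not:

```python
# Toy illustration of why splitting dynamic-config strings at commas
# breaks values that themselves contain commas.

def parse_configs_naive(s):
    """Split 'k1=v1,k2=v2' at every comma, as a simple CLI parser might."""
    pairs = {}
    for part in s.split(","):
        key, _, value = part.partition("=")
        pairs[key] = value
    return pairs

# A JSON-valued config: the commas inside the JSON are misread as
# pair separators, mangling the value.
broken = parse_configs_naive('my.json.config={"a": 1, "b": 2}')
print(broken)  # {'my.json.config': '{"a": 1', ' "b": 2}': ''}

# Reading the same config from a properties-style file sidesteps the
# problem, since each line holds exactly one key=value entry.
lines = ['my.json.config={"a": 1, "b": 2}']
from_file = {k: v for k, _, v in (line.partition("=") for line in lines)}
print(from_file)  # {'my.json.config': '{"a": 1, "b": 2}'}
```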

I'd like to enable setting these configurations from the command line, so
I'm proposing that we add a new option to kafka-configs.sh that takes a
properties file.

I've created a KIP for this idea:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input
And a JIRA: https://issues.apache.org/jira/browse/KAFKA-9612

I'd appreciate your feedback on the proposal.

Thanks,
Aneel


[jira] [Created] (KAFKA-9612) CLI Dynamic Configuration with file input

2020-02-26 Thread Aneel Nazareth (Jira)
Aneel Nazareth created KAFKA-9612:
-

 Summary: CLI Dynamic Configuration with file input
 Key: KAFKA-9612
 URL: https://issues.apache.org/jira/browse/KAFKA-9612
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Aneel Nazareth


Add a --add-config-file option to kafka-configs.sh

More details here: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 2.5.0 release

2020-02-26 Thread David Arthur
Viktor, the change LGTM. I've approved and merged the cherry-pick version
into 2.5.

Thanks!
David

On Tue, Feb 25, 2020 at 4:43 AM Viktor Somogyi-Vass 
wrote:

> Hi David,
>
> There are two short JIRAs related to KIP-352 that documents the newly added
> metrics. Is it possible to merge them in?
> https://github.com/apache/kafka/pull/7434 (trunk)
> https://github.com/apache/kafka/pull/8127 (2.5 cherry-pick)
>
> Thanks,
> Viktor
>
>
> On Mon, Feb 24, 2020 at 7:22 PM David Arthur  wrote:
>
> > Thanks, Tu. I've moved KIP-467 out of the release plan.
> >
> > -David
> >
> > On Thu, Feb 20, 2020 at 6:00 PM Tu Tran  wrote:
> >
> > > Hi David,
> > >
> > > Thanks for being the release main driver. Since the implementation for
> > the
> > > last part of KIP-467 wasn't finalized prior to Feb 12th, could you
> remove
> > > KIP-467 from the list?
> > >
> > > Thanks,
> > > Tu
> > >
> > > On Thu, Feb 20, 2020 at 7:18 AM David Arthur  wrote:
> > >
> > > > Randall / Konstantine,
> > > >
> > > > Sorry for the late reply. Thanks for the fix and for the update! I
> see
> > > this
> > > > change on the 2.5 branch (@b403c66). Consider this a retroactive
> > approval
> > > > for this bugfix :)
> > > >
> > > > -David
> > > >
> > > > On Fri, Feb 14, 2020 at 2:21 PM Konstantine Karantasis <
> > > > konstant...@confluent.io> wrote:
> > > >
> > > > > Hi David,
> > > > >
> > > > > I want to confirm what Randall mentions above. The code fixes for
> > > > > KAFKA-9556 were in place before code freeze on Wed, but we spent a
> > bit
> > > > more
> > > > > time hardening the conditions of the integration tests and fixing
> > some
> > > > > jenkins branch builders to run the test on repeat.
> > > > >
> > > > > Best,
> > > > > Konstantine
> > > > >
> > > > >
> > > > > On Fri, Feb 14, 2020 at 7:42 AM Randall Hauch 
> > > wrote:
> > > > >
> > > > > > Hi, David.
> > > > > >
> > > > > > I just filed https://issues.apache.org/jira/browse/KAFKA-9556
> that
> > > > > > identifies two pretty minor issues with the new KIP-558 that adds
> > new
> > > > > > Connect REST API endpoints to get the list of topics used by a
> > > > connector.
> > > > > > The impact is high: the feature cannot be fully disabled, and
> > Connect
> > > > > does
> > > > > > not automatically reset the topic set when a connector is
> deleted.
> > > > > > https://github.com/apache/kafka/pull/8085 includes the two
> fixes,
> > > and
> > > > > also
> > > > > > adds more unit and integration tests for this feature. Although I
> > > just
> > > > > > created the blocker this AM, Konstantine has actually been working
> on
> > > the
> > > > > fix
> > > > > > for four days. Risk of merging this PR is low, since a) the new
> > > > > integration
> > > > > > tests add significant coverage and we've run the new tests
> numerous
> > > > > times,
> > > > > > and b) the fixes help gate the new feature even more and allow
> the
> > > > > feature
> > > > > > to be completely disabled.
> > > > > >
> > > > > > I'd like approval to merge
> > https://github.com/apache/kafka/pull/8085
> > > > > >
> > > > > > Thanks!
> > > > > > Randall
> > > > > >
> > > > > > On Mon, Feb 10, 2020 at 11:31 AM David Arthur 
> > > > wrote:
> > > > > >
> > > > > > > Just a friendly reminder that this Wednesday, February 12th, is
> > the
> > > > > code
> > > > > > > freeze for the 2.5.0 release. After this time we will only
> accept
> > > > > blocker
> > > > > > > bugs onto the release branch.
> > > > > > >
> > > > > > > Thanks!
> > > > > > > David
> > > > > > >
> > > > > > > On Fri, Jan 31, 2020 at 5:13 PM David Arthur  >
> > > > wrote:
> > > > > > >
> > > > > > > > Thanks! I've updated the list.
> > > > > > > >
> > > > > > > > On Thu, Jan 30, 2020 at 5:48 PM Konstantine Karantasis <
> > > > > > > > konstant...@confluent.io> wrote:
> > > > > > > >
> > > > > > > >> Hi David,
> > > > > > > >>
> > > > > > > >> thanks for driving the release.
> > > > > > > >>
> > > > > > > >> Please also remove KIP-158 from the list of KIPs that you
> plan
> > > to
> > > > > > > include
> > > > > > > >> in 2.5
> > > > > > > >> KIP-158 has been accepted, but the implementation is not yet
> > > > final.
> > > > > It
> > > > > > > >> will be included in the release that follows 2.5.
> > > > > > > >>
> > > > > > > >> Regards,
> > > > > > > >> Konstantine
> > > > > > > >>
> > > > > > > >> On 1/30/20, Matthias J. Sax  wrote:
> > > > > > > >> > Hi David,
> > > > > > > >> >
> > > > > > > >> > the following KIP from the list did not make it:
> > > > > > > >> >
> > > > > > > >> >  - KIP-216 (no PR yet)
> > > > > > > >> >  - KIP-399 (no PR yet)
> > > > > > > >> >  - KIP-401 (PR not merged yet)
> > > > > > > >> >
> > > > > > > >> >
> > > > > > > >> > KIP-444 should be included as we did make progress, but it
> > is
> > > > > still
> > > > > > > not
> > > > > > > >> > fully implement and we need to finish in in 2.6 release.
> > > > > > > >> >
> > > > > > > >> > KIP-447 is partially implemented in 2.5 (ie, broker and
> > > > > > > >> > consumer/producer 

Re: Permission to create KIP

2020-02-26 Thread Guozhang Wang
Hello Aneel, I've added your id to the wiki space.


Guozhang

On Wed, Feb 26, 2020 at 7:40 AM Aneel Nazareth  wrote:

> Hi, I'd like to create a KIP. Would you please give user "aneel" permission
> to do so?
>
> Thanks,
> Aneel Nazareth
>


-- 
-- Guozhang


Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-02-26 Thread Harsha Chintalapani
On Tue, Feb 25, 2020 at 12:46 PM, Jun Rao  wrote:

> Hi, Satish,
>
> Thanks for the updated doc. The new API seems to be an improvement
> overall. A few more comments below.
>
> 100. For each of the operations related to tiering, it would be useful to
> provide a description on how it works with the new API. These include
> things like consumer fetch, replica fetch, offsetForTimestamp, retention
> (remote and local) by size, time and logStartOffset, topic deletion, etc.
> This will tell us if the proposed APIs are sufficient.
>

Thanks for the feedback Jun. We will add more details around this.


> 101. For the default implementation based on internal topic, is it meant
> as a proof of concept or for production usage? I assume that it's the
> former. However, if it's the latter, then the KIP needs to describe the
> design in more detail.
>

Yes, it is meant for production use. Ideally it would be good to merge
this in as the default implementation of the metadata service. We can add more
details around design and testing.

102. When tiering a segment, the segment is first written to the object
> store and then its metadata is written to RLMM using the api "void
> putRemoteLogSegmentData()".
> One potential issue with this approach is that if the system fails after
> the first operation, it leaves a garbage in the object store that's never
> reclaimed. One way to improve this is to have two separate APIs, sth like
> preparePutRemoteLogSegmentData() and commitPutRemoteLogSegmentData().
>
> 103. It seems that the transactional support and the ability to read from
> follower are missing.
>
> 104. It would be useful to provide a testing plan for this KIP.
>

We are working on adding more details around transactional support and
coming up with a test plan, including system tests and integration tests.
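As a rough illustration of Jun's prepare/commit suggestion (point 102), here is a toy Python sketch; the function names and the PENDING/COMMITTED states are hypothetical, not the KIP's actual API:

```python
# Toy sketch of the two-phase put idea for tiering a segment: the
# upload intent is recorded as "pending" in the metadata store first,
# so a crash between the object-store write and the metadata commit
# leaves a reclaimable record rather than untracked garbage.

object_store = {}    # stands in for S3/HDFS
metadata_store = {}  # stands in for the RLMM-backed metadata store

def prepare_put(segment_id, data):
    metadata_store[segment_id] = "PENDING"  # durable intent first
    object_store[segment_id] = data         # then upload the bytes

def commit_put(segment_id):
    metadata_store[segment_id] = "COMMITTED"

def reclaim_orphans():
    """A janitor deletes uploads whose metadata never reached COMMITTED."""
    for seg, state in list(metadata_store.items()):
        if state == "PENDING":
            object_store.pop(seg, None)
            del metadata_store[seg]

prepare_put("seg-0", b"...")
# Simulate a crash before commit: seg-0 stays PENDING, so it is reclaimable.
reclaim_orphans()
assert "seg-0" not in object_store

prepare_put("seg-1", b"...")
commit_put("seg-1")
reclaim_orphans()
print(metadata_store)  # {'seg-1': 'COMMITTED'}
```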

Thanks,
>
> Jun
>
> On Mon, Feb 24, 2020 at 8:10 AM Satish Duggana 
> wrote:
>
> Hi Jun,
> Please look at the earlier reply and let us know your comments.
>
> Thanks,
> Satish.
>
> On Wed, Feb 12, 2020 at 4:06 PM Satish Duggana 
> wrote:
>
> Hi Jun,
> Thanks for your comments on the separation of remote log metadata storage
> and remote log storage.
> We had a few discussions since early Jan on how to support eventually
> consistent stores like S3 by uncoupling remote log segment metadata and
> remote log storage. It is written with details in the doc here(1). Below is
> the brief summary of the discussion from that doc.
>
> The current approach consists of pulling the remote log segment metadata
> from remote log storage APIs. It worked fine for storages like HDFS. But
> one of the problems of relying on the remote storage to maintain metadata
> is that tiered-storage needs to be strongly consistent, with an impact not
> only on the metadata(e.g. LIST in S3) but also on the segment data(e.g. GET
> after a DELETE in S3). The cost of maintaining metadata in remote storage
> needs to be factored in. This is true in the case of S3, LIST APIs incur
> huge costs as you raised earlier.
> So, it is good to separate the remote storage from the remote log metadata
> store. We refactored the existing RemoteStorageManager and introduced
> RemoteLogMetadataManager. Remote log metadata store should give strong
> consistency semantics but remote log storage can be eventually consistent.
> We can have a default implementation for RemoteLogMetadataManager which
> uses an internal topic(as mentioned in one of our earlier emails) as
> storage. But users can always plugin their own RemoteLogMetadataManager
> implementation based on their environment.
>
> Please go through the updated KIP and let us know your comments. We have
> started refactoring for the changes mentioned in the KIP and there may be a
> few more updates to the APIs.
>
> [1]
>
> https://docs.google.com/document/d/1qfkBCWL1e7ZWkHU7brxKDBebq4ie9yK20XJnKbgAlew/edit?ts=5e208ec7#
>
> On Fri, Dec 27, 2019 at 5:43 PM Ivan Yurchenko 
>
> wrote:
>
> Hi all,
>
> Jun:
>
> (a) Cost: S3 list object requests cost $0.005 per 1000 requests. If
>
> you
>
> have 100,000 partitions and want to pull the metadata for each
>
> partition
>
> at
>
> the rate of 1/sec. It can cost $0.5/sec, which is roughly $40K per
>
> day.
>
> I want to note here, that no reasonably durable storage will be cheap at
> 100k RPS. For example, DynamoDB might give the same ballpark
>
> figures.
>
> If we want to keep the pull-based approach, we can try to reduce this
>
> number
>
> in several ways: doing listings less frequently (as Satish mentioned, with
> the current defaults it's ~3.33k RPS for your example), batching listing
> operations in some way (depending on the storage; it might require the
> change of RSM's interface).
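For reference, the cost figures in this thread check out as a quick back-of-the-envelope calculation; the 30-second poll interval behind the second figure is an assumption inferred from the "~3.33k RPS" estimate above:

```python
# Back-of-the-envelope check of the S3 LIST cost figures quoted above.
cost_per_request = 0.005 / 1000        # $0.005 per 1000 LIST requests
partitions = 100_000

# Polling each partition's metadata once per second:
rps = partitions * 1                   # 100,000 requests/sec
cost_per_sec = rps * cost_per_request  # $0.50/sec
cost_per_day = cost_per_sec * 86_400   # ~$43,200/day, i.e. "roughly $40K"
print(cost_per_day)                    # 43200.0

# With an assumed 30-second poll interval, the request rate drops to
# roughly the "~3.33k RPS" figure mentioned in the thread:
rps_30s = partitions / 30
print(round(rps_30s))                  # 3333
```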
>
> There are different ways for doing push based metadata propagation.
>
> Some
>
> object stores may support that already. For example, S3 supports
>
> events
>
> notification
>
> This sounds interesting. However, I see a couple of issues using it:
> 1. As I understand the 

Permission to create KIP

2020-02-26 Thread Aneel Nazareth
Hi, I'd like to create a KIP. Would you please give user "aneel" permission
to do so?

Thanks,
Aneel Nazareth


Re: [DISCUSS] KIP-508: Make Suppression State Queriable - rebooted.

2020-02-26 Thread Dongjin Lee
> I was under the impression that you wanted to expand the scope of the KIP
to additionally allow querying the internal buffer, not just the result.
Can you clarify whether you are proposing to allow querying the state of
the internal buffer, the result, or both?

Sorry for the confusion. As we already discussed, we only need to query
the suppressed output, not the internal buffer. The current implementation
is wrong. I will notify you after refining the KIP and the implementation
accordingly - I must have been confused as well.
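As a rough illustration of the final-result semantics under discussion, here is a toy Python emulation (not the Kafka Streams API; window size and record shapes are made up): counts are buffered per (key, window) and only the final value becomes queryable once the window closes.

```python
# Toy emulation of suppression with a queryable "final-count" store:
# in-flight counts stay in a buffer; when stream time passes a window's
# end, its final count is flushed to the queryable store.

from collections import defaultdict

WINDOW = 10  # window size in abstract time units

buffer = defaultdict(int)  # in-flight counts, keyed by (key, window_start)
final_count = {}           # queryable store of suppressed (final) results

def process(key, timestamp):
    window_start = (timestamp // WINDOW) * WINDOW
    buffer[(key, window_start)] += 1
    # Windows whose end is at or before stream time are closed: flush them.
    for k, w in [kw for kw in buffer if kw[1] + WINDOW <= timestamp]:
        final_count[(k, w)] = buffer.pop((k, w))

for key, ts in [("A", 1), ("A", 3), ("A", 12), ("B", 14)]:
    process(key, ts)

# Window [0, 10) for key A closed once stream time reached 12, so only
# its final count is queryable; window [10, 20) is still buffered.
print(final_count)  # {('A', 0): 2}
```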

Thanks,
Dongjin

On Tue, Feb 25, 2020 at 12:17 AM John Roesler  wrote:

> Hi Dongjin,
>
> Ah, I think I may have been confused. I 100% agree that we need a
> materialized variant for suppress(). Then, you could do:
> ...suppress(..., Materialized.as(“final-count”))
>
> If that’s your proposal, then we are on the same page.
>
> I was under the impression that you wanted to expand the scope of the KIP
> to additionally allow querying the internal buffer, not just the result.
> Can you clarify whether you are proposing to allow querying the state of
> the internal buffer, the result, or both?
>
> Thanks,
> John
>
> On Thu, Feb 20, 2020, at 08:41, Dongjin Lee wrote:
> > Hi John,
> > Thanks for your kind explanation with an example.
> >
> > > But it feels like you're saying you're trying to do something different
> > than just query the windowed key and get back the current count?
> >
> > Yes, for example, what if we need to retrieve (all, or a range of) keys
> > with a closed window? In this example, let's imagine we need to retrieve
> > only (key=A, window=10), not (key=A, window=20).
> >
> > Of course, the value accompanied by a flushed key is exactly the same as
> > the one in the upstream KTable; however, if our intention is not pointing
> > out a specific key but retrieving a group of unspecified keys, we get
> > stuck - since we can't be sure which key is flushed out beforehand.
> >
> > One workaround would be materializing it with `suppressed.filter(e ->
> > true, Materialized.as("final-count"))`. But I think providing a
> > materialized variant for the suppress method is better than this
> > workaround.
> >
> > Thanks,
> > Dongjin
> >
> > On Thu, Feb 20, 2020 at 1:26 AM John Roesler 
> wrote:
> >
> > > Thanks for the response, Dongjin,
> > >
> > > I'm sorry, but I'm still not following. It seems like the view you
> would
> > > get on the "current state of the buffer" would always be equivalent to
> > > the view of the upstream table.
> > >
> > > Let me try an example, and maybe you can point out the flaw in my
> > > reasoning.
> > >
> > > Let's say we're doing 10 ms windows with a grace period of zero.
> > > Let's also say we're computing a windowed count, and that we have
> > > a "final results" suppression after the count. Let's  materialize the
> > > count as "Count" and the suppressed result as "Final Count".
> > >
> > > Suppose we get an input event:
> > > (time=10, key=A, value=...)
> > >
> > > Then, Count will look like:
> > >
> > > | window | key | value |
> > > | 10 | A   | 1 |
> > >
> > > The (internal) suppression buffer will contain:
> > >
> > > | window | key | value |
> > > | 10 | A   | 1 |
> > >
> > > The record is still buffered because the window isn't closed yet.
> > > Final Count is an empty table:
> > >
> > > | window | key | value |
> > >
> > > ---
> > >
> > > Now, we get a second event:
> > > (time=15, key=A, value=...)
> > >
> > > Then, Count will look like:
> > >
> > > | window | key | value |
> > > | 10 | A   | 2 |
> > >
> > > The (internal) suppression buffer will contain:
> > >
> > > | window | key | value |
> > > | 10 | A   | 2 |
> > >
> > > The record is still buffered because the window isn't closed yet.
> > > Final Count is an empty table:
> > >
> > > | window | key | value |
> > >
> > >
> > > ---
> > >
> > > Finally, we get a third event:
> > > (time=20, key=A, value=...)
> > >
> > > Then, Count will look like:
> > >
> > > | window | key | value |
> > > | 10 | A   | 2 |
> > > | 20 | A   | 1 |
> > >
> > > The (internal) suppression buffer will contain:
> > >
> > > | window | key | value |
> > > | 20 | A   | 1 |
> > >
> > > Note that window 10 has been flushed out, because it's now closed.
> > > And window 20 is buffered because it isn't closed yet.
> > > Final Count is now:
> > >
> > > | window | key | value |
> > > | 10 | A   | 2 |
> > >
> > >
> > > ---
> > >
> > > Reading your email, I can't figure out what value there is in querying
> > > the internal suppression buffer, since it only contains exactly the
> > > same value as the upstream table, for each key that is still buffered.
> > > But it feels like you're saying you're trying to do something different
> > > than just query the windowed key and get back the current count?
> > >
> > > Thanks,
> > > -John
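John's walkthrough can be reproduced with a tiny model of a "final results" suppression buffer. This is a sketch in Python rather than the actual Kafka Streams Java API, using his parameters (10 ms windows, grace period zero, stream time advancing with each event):

```python
# Minimal model of "final results" suppression: a window is flushed to the
# final table only once stream time passes its close (window end + grace).
WINDOW_SIZE, GRACE = 10, 0

def process(events):
    count, buffer, final = {}, {}, {}  # upstream Count / internal buffer / Final Count
    stream_time = 0
    for ts, key in events:
        stream_time = max(stream_time, ts)
        window = (ts // WINDOW_SIZE) * WINDOW_SIZE
        count[(window, key)] = count.get((window, key), 0) + 1
        buffer[(window, key)] = count[(window, key)]
        # Flush every buffered window that is now closed.
        closed = [wk for wk in buffer if wk[0] + WINDOW_SIZE + GRACE <= stream_time]
        for wk in closed:
            final[wk] = buffer.pop(wk)
    return count, buffer, final

count, buffer, final = process([(10, "A"), (15, "A"), (20, "A")])
print(count)   # {(10, 'A'): 2, (20, 'A'): 1}
print(buffer)  # {(20, 'A'): 1}  -- window 20 is not closed yet
print(final)   # {(10, 'A'): 2}  -- flushed when stream time reached 20
```

It shows John's point directly: for every key still sitting in the buffer, the buffered value equals the current upstream Count, so querying the buffer adds nothing over querying Count itself.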
> > >
> > >
> > > On Wed, Feb 19, 2020, at 09:49, Dongjin Lee wrote:
> > > > Hi 

Re: access to jiira

2020-02-26 Thread Bill Bejeck
Hi Stepanenko,

You've been added as a contributor to Jira, so you should be able to
self-assign tickets now.

Thanks for your interest in Apache Kafka.

-Bill

On Wed, Feb 26, 2020 at 4:14 AM we studio  wrote:

> Hello. I would like to contribute to Kafka. Could you please add me to
> jira? My jira account username: avalsa (Stepanenko Vyacheslav)
>


Re: [KAFKA-557] Add emit on change support for Kafka Streams

2020-02-26 Thread Bruno Cadonna
Hi Richard,

1. Could you change "idempotent update operations will only be dropped
from KTables, not from other classes." -> "idempotent update operations
will only be dropped from materialized KTables"? For non-materialized
KTables -- as they can occur after optimization of the topology -- we
cannot drop idempotent updates.

2. I cannot completely follow the metrics section. Do you want to
record all idempotent updates or only the dropped ones? In particular,
I do not understand the following sentences:
"For that matter, even if we don't drop idempotent updates, we should
at the very least record the number of idempotent updates that has
been seen go through a particular processor."
"Therefore, we should add some metrics which will count the number of
idempotent updates that each node has seen."
I do not see how we can record idempotent updates that we do not drop.
If we see them, we should drop them. If we do not see them, we cannot
drop them and we cannot record them.

Best,
Bruno

On Wed, Feb 26, 2020 at 4:57 AM Richard Yu  wrote:
>
> Hi John,
>
> Sounds good. It looks like we are close to wrapping things up, if there
> aren't any other revisions which need to be made. (If so, please comment
> in the thread.)
> I will start the voting process this Thursday (Pacific Standard Time).
>
> Cheers,
> Richard
>
> On Tue, Feb 25, 2020 at 11:59 AM John Roesler  wrote:
>
> > Hi Richard,
> >
> > Sorry for the slow reply. I actually think we should avoid checking
> > equals() for now. Your reasoning is good, but the truth is that
> > depending on the implementation of equals() is non-trivial,
> > semantically, and (though I proposed it before), I'm not convinced
> > it's worth the risk. Much better to start with exactly one kind of
> > "idempotence detection".
> >
> > Even if someone does update their serdes, we know that the new
> > serde would still be able to _de_serialize the old format, or the whole
> > app would break. The situation is that the new result gets encoded
> > in the new binary format, which means we don't detect an idempotent
> > update for what it is. In this case, we'd write the new binary format to
> > disk and the changelog, and forward it downstream. However, we only
> > do this once. Now that the binary format for that record has been updated,
> > we would correctly detect idempotence of any subsequent updates.
> >
> > Plus, we would still be able to filter out idempotent updates in
> > repartition
> > sinks, since for those, we use the new serde to serialize both the "old"
> > and
> > "new" result.
> >
> > It's certainly a good observation, but I think we can just make a note of
> > it
> > in "rejected alternatives" for now, and plan to refine it later, if it does
> > pose a big performance problem.
> >
> > Thanks!
> > -John
> >
> > On Sat, Feb 22, 2020, at 18:14, Richard Yu wrote:
> > > Hi all,
> > >
> > > Updated the KIP.
> > >
> > > Just a question: do you think it would be a good idea if we check for
> > both
> > > Object#equals() and binary equality?
> > > Because there might be some subtle changes in the serialization (for
> > > example, if the user decides to upgrade their serialization procedure to
> > a
> > > new one), but the underlying values of the result might be the same.
> > > (therefore equals() might return true)
> > >
> > > Do you think this would be plausible?
> > >
> > > Cheers,
> > > Richard
> > >
> > > On Fri, Feb 21, 2020 at 2:37 PM Richard Yu 
> > > wrote:
> > >
> > > > Hello,
> > > >
> > > > Just to make some updates. I changed the name of the metric so that it
> > was
> > > > more in line with usual Kafka naming conventions for metrics / sensors.
> > > > Below is the updated description of the metric:
> > > >
> > > > dropped-idempotent-updates : (Level 2 - Per Task) DEBUG (rate | total)
> > > >
> > > > Description: This metric will record the number of updates that have
> > been
> > > > dropped since they are essentially re-performing an earlier operation.
> > > >
> > > > Note:
> > > >
> > > >- The rate option indicates the ratio of records dropped to actual
> > > >volume of records passing through the task.
> > > >- The total option will just give a raw count of the number of
> > records
> > > >dropped.
> > > >
> > > >
> > > > I hope that this is more on point.
> > > >
> > > > Best,
> > > > Richard
> > > >
> > > > On Fri, Feb 21, 2020 at 2:20 PM Richard Yu  > >
> > > > wrote:
> > > >
> > > >> Hi all,
> > > >>
> > > >> Thanks for the clarification. I was just confused a little on what was
> > > >> going on.
> > > >>
> > > >> So I guess then that for the actual proposal. We got the following:
> > > >>
> > > >> 1. We check for binary equality, and perform no extra look ups.
> > > >> 2. Emphasize that this applies only to materialized tables.
> > > >> 3. We drop aggregation updates if key, value and timestamp is the
> > same.
> > > >>
> > > >> Then that settles the behavior changes. So it looks like the Metric
> > that
> > > >> is the only thing that 

[jira] [Created] (KAFKA-9611) KGroupedTable.aggregate(...) emits incorrect values

2020-02-26 Thread Neil Green (Jira)
Neil Green created KAFKA-9611:
-

 Summary: KGroupedTable.aggregate(...) emits incorrect values
 Key: KAFKA-9611
 URL: https://issues.apache.org/jira/browse/KAFKA-9611
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.4.0
Reporter: Neil Green


I've run into what appears to be strange behaviour in a streams app.

I have a KTable produced from a topic. The table contains entries like
"abc1234/signal1" : 1, "abc1234/signal2" : 3. The key is "id/signal name"
and the value is an int. I want to produce an aggregate KTable containing
the sum of all of the signals for a given id.

So if the source KTable contains:

+-----------------+---+
| abc1234/signal1 | 2 |
| abc1234/signal2 | 4 |
| abc4566/signal1 | 3 |
+-----------------+---+

Then the output should contain:

+---------+---+
| abc1234 | 6 |
| abc4566 | 3 |
+---------+---+

On a change

+-----------------+---+
| abc1234/signal1 | 3 |
+-----------------+---+

I would expect the change

+---------+---+
| abc1234 | 7 |
+---------+---+

to be published.

In fact there are two changelog entries published:

+---------+---+
| abc1234 | 4 |  // This is incorrect. The sum of the signals is never 4.
+---------+---+

Then

+---------+---+
| abc1234 | 7 |
+---------+---+
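For what it's worth, the intermediate value 4 is consistent with how KGroupedTable aggregates process KTable updates: an update is handled as a subtractor call (remove the old value) followed by an adder call (add the new value), and without caching both intermediate results can reach the changelog. A sketch of that mechanic (a simplified model, not the Streams code itself):

```python
# Model of KGroupedTable.aggregate over a KTable update: the old value is
# subtracted before the new value is added, emitting each intermediate sum.
def apply_update(aggregate, old_value, new_value):
    emitted = []
    aggregate -= old_value    # subtractor: 6 - 2 = 4 (the "incorrect" entry)
    emitted.append(aggregate)
    aggregate += new_value    # adder: 4 + 3 = 7 (the final, correct sum)
    emitted.append(aggregate)
    return emitted

# abc1234 aggregates to 6 (signal1=2 + signal2=4); signal1 changes 2 -> 3.
print(apply_update(6, 2, 3))  # [4, 7] -- matches the two changelog entries
```

So the "4" is the aggregate after the old signal1 value was subtracted but before the new one was added.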



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


access to jiira

2020-02-26 Thread we studio
Hello. I would like to contribute to Kafka. Could you please add me to jira? My 
jira account username: avalsa (Stepanenko Vyacheslav)


Build failed in Jenkins: kafka-trunk-jdk8 #4273

2020-02-26 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: compile error in EmbeddedKafkaCluster (#8170)

[manikumar] KAFKA-9594: Add a separate lock to pause the follower log append 
while


--
[...truncated 2.88 MB...]
org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain