[jira] [Resolved] (KAFKA-9122) Externalizing DB password is not working

2020-03-09 Thread Konstantine Karantasis (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-9122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantine Karantasis resolved KAFKA-9122.
---
Resolution: Not A Bug

Closing, given that this was a configuration issue. 

> Externalizing DB password is not working
> 
>
> Key: KAFKA-9122
> URL: https://issues.apache.org/jira/browse/KAFKA-9122
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.2.1
> Environment: CentOS 6.7
>Reporter: Dwijadas
>Priority: Trivial
> Attachments: Screenshot_1.png
>
>
> Hi
> I am trying to externalize the username and password for an Oracle DB using
> the {{FileConfigProvider}} provider.
> For that I have created a properties file that contains the username and
> password.
>  
> {{$ cat /home/kfk/data/ora_credentials.properties
> ora.username="apps"
> ora.password="Passw0rd!"}}
> Added {{file}} to {{config.providers}} and set {{config.providers.file.class}}
> to {{FileConfigProvider}} in the worker config:
>  
> {{$ cat /home/kfk/etc/kafka/connect-distributed.properties
> ...
> ...
> config.providers=file
> config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
> ...
> ...}}
> Restarted the worker and submitted a connector via REST with the following config:
>  
> {{"config": \{
>    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
>    "tasks.max": "1",
>    "connection.user": "${file:/home/kfk/data/ora_credentials.properties:ora.username}",
>    "connection.password": "${file:/home/kfk/data/ora_credentials.properties:ora.password}",
>    ...
>    ...
> }}}
> Submitting the above config results in the following error:
>  
> {{{
>   "error_code": 400,
>   "message": "Connector configuration is invalid and contains the following 2 
> error(s):\nInvalid value java.sql.SQLException: ORA-01017: invalid 
> username/password; logon denied\n for configuration Couldn't open connection 
> to jdbc:oracle:thin:@oebsr122.infodetics.com:1521:VIS\nInvalid value 
> java.sql.SQLException: ORA-01017: invalid username/password; logon denied\n 
> for configuration Couldn't open connection to 
> jdbc:oracle:thin:@oebsr122.infodetics.com:1521:VIS\nYou can also find the 
> above list of errors at the endpoint `/\{connectorType}/config/validate`"
> }}}
> It appears the above config does not substitute the username and password at
> all; instead, the raw values of connection.user and connection.password are
> used to connect to the DB, resulting in the ORA-01017: invalid
> username/password error.
> Is it a bug?
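
Given the "Not A Bug" resolution above, a likely culprit is the quoting in the
properties file: FileConfigProvider loads the file via java.util.Properties,
which keeps the double quotes as part of each value, so the literal string
"apps" (quotes included) is sent to Oracle. A minimal sketch demonstrating the
Properties behavior (class name is illustrative):

import java.io.StringReader;
import java.util.Properties;

public class PropertiesQuoteDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Same form as the line in ora_credentials.properties above:
        props.load(new StringReader("ora.username=\"apps\"\n"));
        // Prints "apps" including the surrounding quotes; they are part of
        // the value, which would make the Oracle login fail with ORA-01017.
        System.out.println(props.getProperty("ora.username"));
    }
}

Removing the quotes (ora.username=apps) should let the provider substitute the
bare value.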



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[VOTE] 2.5.0 RC1

2020-03-09 Thread David Arthur
Hello Kafka users, developers and client-developers,

This is the second candidate for release of Apache Kafka 2.5.0. The first
release candidate included an erroneous NOTICE file, so another RC was
needed to fix that.

This is a major release of Kafka which includes many new features,
improvements, and bug fixes including:

* TLS 1.3 support (1.2 is now the default)
* Co-groups for Kafka Streams (see the sketch after this list)
* Incremental rebalance for Kafka Consumer
* New metrics for better operational insight
* Upgrade ZooKeeper to 3.5.7
* Deprecate support for Scala 2.11
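
For a flavor of the co-groups feature, here is a hedged sketch of the new
Kafka Streams co-group API (topic names, value types, and aggregation logic
are illustrative, not taken from the release):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KGroupedStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class CogroupSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KGroupedStream<String, String> clicks =
                builder.<String, String>stream("clicks").groupByKey();
        KGroupedStream<String, String> views =
                builder.<String, String>stream("views").groupByKey();

        // Co-grouping aggregates several grouped streams into one table (and
        // one state store) in a single pass, instead of aggregating each
        // stream separately and then joining the results.
        KTable<String, String> summary = clicks
                .cogroup((key, value, agg) -> agg + "c")
                .cogroup(views, (key, value, agg) -> agg + "v")
                .aggregate(() -> "",
                        Materialized.<String, String, KeyValueStore<Bytes, byte[]>>with(
                                Serdes.String(), Serdes.String()));
    }
}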

Release notes for the 2.5.0 release:
https://home.apache.org/~davidarthur/kafka-2.5.0-rc1/RELEASE_NOTES.html

*** Please download, test and vote by Monday, March 16th 2020 5pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
https://home.apache.org/~davidarthur/kafka-2.5.0-rc1/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~davidarthur/kafka-2.5.0-rc1/javadoc/

* Tag to be voted upon (off 2.5 branch) is the 2.5.0-rc1 tag:
https://github.com/apache/kafka/releases/tag/2.5.0-rc1

* Documentation:
https://kafka.apache.org/25/documentation.html

* Protocol:
https://kafka.apache.org/25/protocol.html

* Links to successful Jenkins builds for the 2.5 branch to follow

Thanks,
David Arthur


Build failed in Jenkins: kafka-trunk-jdk11 #1224

2020-03-09 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9176: Do not update limit offset if we are in RESTORE_ACTIVE mode


--
[...truncated 2.91 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

Build failed in Jenkins: kafka-trunk-jdk8 #4305

2020-03-09 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9176: Do not update limit offset if we are in RESTORE_ACTIVE mode


--
[...truncated 2.89 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOU

Re: [Vote] KIP-569: Update DescribeConfigsResponse to include additional metadata information

2020-03-09 Thread Gwen Shapira
+1
Looks great. Thanks for the proposal, Shailesh.

Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog

On Mon, Mar 09, 2020 at 6:00 AM, Shailesh Panwar < span...@confluent.io > wrote:

> 
> 
> 
> Hi All,
> I would like to start a vote on KIP-569: Update
> DescribeConfigsResponse to include additional metadata information
> 
> 
> 
> The KIP is here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-569%3A+DescribeConfigsResponse+-+Update+the+schema+to+include+additional+metadata+information+of+the+field
> 
> 
> 
> Thanks,
> Shailesh
> 
> 
>

[jira] [Created] (KAFKA-9689) Automatic broker version detection to initialize stream client

2020-03-09 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-9689:
--

 Summary: Automatic broker version detection to initialize stream 
client
 Key: KAFKA-9689
 URL: https://issues.apache.org/jira/browse/KAFKA-9689
 Project: Kafka
  Issue Type: Sub-task
Reporter: Boyang Chen


Eventually we shall deprecate the flag that suppresses the EOS thread-producer
feature; instead we will detect the broker version to decide which semantics
to use.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9176) Flaky test failure: OptimizedKTableIntegrationTest.shouldApplyUpdatesToStandbyStore

2020-03-09 Thread Guozhang Wang (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guozhang Wang resolved KAFKA-9176.
--
Fix Version/s: 2.6.0
   Resolution: Fixed

> Flaky test failure:  
> OptimizedKTableIntegrationTest.shouldApplyUpdatesToStandbyStore
> 
>
> Key: KAFKA-9176
> URL: https://issues.apache.org/jira/browse/KAFKA-9176
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.4.0
>Reporter: Manikumar
>Assignee: Guozhang Wang
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.6.0
>
>
> h4. 
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.4-jdk8/detail/kafka-2.4-jdk8/65/tests]
> h4. Error
> org.apache.kafka.streams.errors.InvalidStateStoreException: Cannot get state 
> store source-table because the stream thread is PARTITIONS_ASSIGNED, not 
> RUNNING
> h4. Stacktrace
> org.apache.kafka.streams.errors.InvalidStateStoreException: Cannot get state 
> store source-table because the stream thread is PARTITIONS_ASSIGNED, not 
> RUNNING
>  at 
> org.apache.kafka.streams.state.internals.StreamThreadStateStoreProvider.stores(StreamThreadStateStoreProvider.java:51)
>  at 
> org.apache.kafka.streams.state.internals.QueryableStoreProvider.getStore(QueryableStoreProvider.java:59)
>  at org.apache.kafka.streams.KafkaStreams.store(KafkaStreams.java:1129)
>  at 
> org.apache.kafka.streams.integration.OptimizedKTableIntegrationTest.shouldApplyUpdatesToStandbyStore(OptimizedKTableIntegrationTest.java:157)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
>  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
>  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
>  at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
>  at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>  at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>  at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>  at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>  at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>  at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>  at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
>  at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
>  at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>  at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker
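
For context, the usual way to avoid this race in integration tests is to wait
until the KafkaStreams instance reaches RUNNING before querying a state store.
A minimal hedged sketch of that pattern (illustrative helper, not the actual
patch merged for this ticket):

import java.time.Duration;
import org.apache.kafka.streams.KafkaStreams;

public final class StreamsTestUtil {
    // Poll until the instance is RUNNING (stores only become queryable then)
    // instead of calling KafkaStreams#store immediately after start().
    public static void waitUntilRunning(KafkaStreams streams, Duration timeout)
            throws InterruptedException {
        long deadlineMs = System.currentTimeMillis() + timeout.toMillis();
        while (streams.state() != KafkaStreams.State.RUNNING) {
            if (System.currentTimeMillis() > deadlineMs) {
                throw new AssertionError("KafkaStreams did not reach RUNNING within " + timeout);
            }
            Thread.sleep(100L);
        }
    }
}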

[jira] [Resolved] (KAFKA-3702) SslTransportLayer.close() does not shutdown gracefully

2020-03-09 Thread Ismael Juma (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ismael Juma resolved KAFKA-3702.

Fix Version/s: 2.0.0
   Resolution: Fixed

> SslTransportLayer.close() does not shutdown gracefully
> --
>
> Key: KAFKA-3702
> URL: https://issues.apache.org/jira/browse/KAFKA-3702
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.0.0
>
>
> The warning "Failed to send SSL Close message" occurs very frequently when
> SSL connections are closed. Close should write outbound data and shut down
> gracefully.
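
As background, a graceful close at the SSLEngine level means flushing the TLS
close_notify alert to the peer before closing the socket. A minimal hedged
sketch of that pattern (not Kafka's actual implementation; class and method
names are illustrative):

import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult;

public final class GracefulSslClose {
    // Write the close_notify alert out before closing the channel, so the
    // peer sees a clean TLS shutdown rather than an abrupt disconnect.
    public static void close(SSLEngine engine, SocketChannel channel) throws Exception {
        ByteBuffer empty = ByteBuffer.allocate(0);
        ByteBuffer net = ByteBuffer.allocate(engine.getSession().getPacketBufferSize());
        engine.closeOutbound();
        while (!engine.isOutboundDone()) {
            SSLEngineResult result = engine.wrap(empty, net); // produces close_notify bytes
            net.flip();
            while (net.hasRemaining()) {
                channel.write(net);
            }
            net.clear();
            if (result.getStatus() == SSLEngineResult.Status.CLOSED) {
                break;
            }
        }
        channel.close();
    }
}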



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Can I ask protocol questions here?

2020-03-09 Thread Adam Bellemare
Hi Chris 

I think it’s fine to ask it here. I’m not aware of any rules against it.

Adam 

> On Mar 9, 2020, at 10:43 AM, Chris Done  wrote:
> 
> Hi all,
> 
> I'm writing a Kafka client at the protocol level and was wondering whether 
> here, or the users@ mailing list was more appropriate for questions of that 
> nature?
> 
> I looked on the web site, but didn't see clarification on this point.
> 
> I'll start a fresh thread if here is indeed the correct place. I have a 
> question to ask about fetch requests not working in specific conditions.
> 
> Thanks,
> 
> Chris


Re: [DISCUSS] Apache Kafka 2.5.0 release

2020-03-09 Thread Bruno Cadonna
Hi Ismael,

I understand your reasoning. I thought, since it is a broken feature,
I'd better let this thread know about it.

Best,
Bruno

On Mon, Mar 9, 2020 at 10:00 PM Ismael Juma  wrote:
>
> Hi Bruno,
>
> 2.4.0 has been out for months. If this was not noticed since then, I don't
> think it qualifies as a blocker for 2.5.0.
>
> Ismael
>
> On Mon, Mar 9, 2020 at 8:44 PM Bruno Cadonna  wrote:
>
> > Hi David,
> >
> > A bug report was filed that can be considered a blocker. Basically,
> > with this bug all RocksDB metrics reported by Streams are constant
> > zero. The bug has been there since 2.4, so it is not a regression, but a
> > broken feature.
> >
> > Here is the ticket: https://issues.apache.org/jira/browse/KAFKA-9675
> > Here is the fix: https://github.com/apache/kafka/pull/8256
> >
> > Best,
> > Bruno
> >
> > On Wed, Feb 26, 2020 at 11:22 PM Randall Hauch  wrote:
> > >
> > > Thanks, David. The PR has been merged to trunk and 2.5, and I'm
> > backporting
> > > to earlier branches. I'll resolve
> > > https://issues.apache.org/jira/browse/KAFKA-9601 when I finish
> > backporting.
> > >
> > > On Wed, Feb 26, 2020 at 1:28 PM David Arthur  wrote:
> > >
> > > > Thanks, Randall. Leaking sensitive config to the logs seems fairly
> > > severe. I think we should include this. Let's proceed with cherry-picking
> > to
> > > > 2.5.
> > > >
> > > > -David
> > > >
> > > > On Wed, Feb 26, 2020 at 2:25 PM Randall Hauch 
> > wrote:
> > > >
> > > > > Hi, David.
> > > > >
> > > > > If we're still not quite ready for an RC, I'd like to squeeze in
> > > > > https://issues.apache.org/jira/browse/KAFKA-9601, which removes the
> > raw
> > > > > connector config properties in a DEBUG level log message. PR is ready
> > > > (test
> > > > > failures are unrelated), the risk is very low, and I think it'd be
> > great
> > > > to
> > > > > correct this sooner than later.
> > > > >
> > > > > Randall
> > > > >
> > > > > On Wed, Feb 26, 2020 at 11:26 AM David Arthur 
> > wrote:
> > > > >
> > > > > > Viktor, the change LGTM. I've approved and merged the cherry-pick
> > > > version
> > > > > > into 2.5.
> > > > > >
> > > > > > Thanks!
> > > > > > David
> > > > > >
> > > > > > On Tue, Feb 25, 2020 at 4:43 AM Viktor Somogyi-Vass <
> > > > > > viktorsomo...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi David,
> > > > > > >
> > > > > > > There are two short JIRAs related to KIP-352 that documents the
> > newly
> > > > > > added
> > > > > > > metrics. Is it possible to merge them in?
> > > > > > > https://github.com/apache/kafka/pull/7434 (trunk)
> > > > > > > https://github.com/apache/kafka/pull/8127 (2.5 cherry-pick)
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Viktor
> > > > > > >
> > > > > > >
> > > > > > > On Mon, Feb 24, 2020 at 7:22 PM David Arthur 
> > > > wrote:
> > > > > > >
> > > > > > > > Thanks, Tu. I've moved KIP-467 out of the release plan.
> > > > > > > >
> > > > > > > > -David
> > > > > > > >
> > > > > > > > On Thu, Feb 20, 2020 at 6:00 PM Tu Tran 
> > wrote:
> > > > > > > >
> > > > > > > > > Hi David,
> > > > > > > > >
> > > > > > > > > Thanks for being the release main driver. Since the
> > > > implementation
> > > > > > for
> > > > > > > > the
> > > > > > > > > last part of KIP-467 wasn't finalized prior to Feb 12th,
> > could
> > > > you
> > > > > > > remove
> > > > > > > > > KIP-467 from the list?
> > > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > > Tu
> > > > > > > > >
> > > > > > > > > On Thu, Feb 20, 2020 at 7:18 AM David Arthur <
> > mum...@gmail.com>
> > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Randall / Konstantine,
> > > > > > > > > >
> > > > > > > > > > Sorry for the late reply. Thanks for the fix and for the
> > > > update!
> > > > > I
> > > > > > > see
> > > > > > > > > this
> > > > > > > > > > change on the 2.5 branch (@b403c66). Consider this a
> > > > retroactive
> > > > > > > > approval
> > > > > > > > > > for this bugfix :)
> > > > > > > > > >
> > > > > > > > > > -David
> > > > > > > > > >
> > > > > > > > > > On Fri, Feb 14, 2020 at 2:21 PM Konstantine Karantasis <
> > > > > > > > > > konstant...@confluent.io> wrote:
> > > > > > > > > >
> > > > > > > > > > > Hi David,
> > > > > > > > > > >
> > > > > > > > > > > I want to confirm what Randall mentions above. The code
> > fixes
> > > > > for
> > > > > > > > > > > KAFKA-9556 were in place before code freeze on Wed, but
> > we
> > > > > spent
> > > > > > a
> > > > > > > > bit
> > > > > > > > > > more
> > > > > > > > > > > time hardening the conditions of the integration tests
> > and
> > > > > fixing
> > > > > > > > some
> > > > > > > > > > > jenkins branch builders to run the test on repeat.
> > > > > > > > > > >
> > > > > > > > > > > Best,
> > > > > > > > > > > Konstantine
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > On Fri, Feb 14, 2020 at 7:42 AM Randall Hauch <
> > > > > rha...@gmail.com>
> > > > > > > > > wrote:
> > > > > > > > > > >
> > > > > > > > > > > > Hi, David.
> > > > > > > > > > > >
> > > > > 

Re: Subject: [VOTE] 2.4.1 RC0

2020-03-09 Thread Ismael Juma
Is this a blocker given that it's been like this for months and no-one
noticed? 2.4.1 seemingly has all the votes needed for the release. Why not
go ahead with it? When KAFKA-9675 is merged, it can be included in the next
release.

Ismael

On Mon, Mar 9, 2020 at 8:43 PM Bill Bejeck  wrote:

> Thanks to everyone for voting.
>
> A new blocker has surfaced
> https://issues.apache.org/jira/browse/KAFKA-9675,
> so I'll do another RC soon.
>
> Thanks again.
> Bill
>
> On Mon, Mar 9, 2020 at 1:35 PM Levani Kokhreidze 
> wrote:
>
> > +1 non-binding.
> >
> > - Built from source
> > - Ran unit tests. All passed.
> > - Quickstart passed.
> >
> > Looking forward to upgrading to 2.4.1
> >
> > Regards,
> > Levani
> >
> > On Mon, 9 Mar 2020, 17:11 Sean Glover, 
> wrote:
> >
> > > +1 (non-binding).  I built from source and ran the unit test suite
> > > successfully.
> > >
> > > Thanks for running this release.  I'm looking forward to upgrading to
> > > 2.4.1.
> > >
> > > Sean
> > >
> > > On Mon, Mar 9, 2020 at 8:07 AM Mickael Maison <
> mickael.mai...@gmail.com>
> > > wrote:
> > >
> > > > Thanks for running the release!
> > > > +1 (binding)
> > > >
> > > > - Verified signatures
> > > > - Built from source
> > > > - Ran unit tests, all passed
> > > > - Ran through quickstart steps, all worked
> > > >
> > > > On Mon, Mar 9, 2020 at 11:04 AM Tom Bentley 
> > wrote:
> > > > >
> > > > > +1 (non-binding)
> > > > >
> > > > > Built from source, all unit tests passed.
> > > > >
> > > > > Thanks Bill.
> > > > >
> > > > > On Mon, Mar 9, 2020 at 3:44 AM Gwen Shapira 
> > wrote:
> > > > >
> > > > > > +1 (binding)
> > > > > >
> > > > > > Verified signatures, built jars from source, quickstart passed
> and
> > > > local
> > > > > > unit tests all passed.
> > > > > >
> > > > > > Thank you for the release Bill!
> > > > > >
> > > > > > Gwen Shapira
> > > > > > Engineering Manager | Confluent
> > > > > > 650.450.2760 | @gwenshap
> > > > > > Follow us: Twitter | blog
> > > > > >
> > > > > > On Sat, Mar 07, 2020 at 8:15 PM, Vahid Hashemian <
> > > > > > vahid.hashem...@gmail.com > wrote:
> > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > +1 (binding)
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Verified signature, built from source, and ran quickstart
> > > > successfully
> > > > > > > (using openjdk version "11.0.6"). I also ran unit tests locally
> > > which
> > > > > > > resulted in a few flaky tests for which there are already open
> > > Jiras:
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker
> > > > > > > ConsumerBounceTest.testCloseDuringRebalance
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > >
> > >
> >
> ConsumerBounceTest.testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize
> > > > > > >
> > > >
> > PlaintextEndToEndAuthorizationTest.testNoConsumeWithDescribeAclViaAssign
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > >
> > > >
> > >
> >
> SaslClientsWithInvalidCredentialsTest.testManualAssignmentConsumerWithAuthenticationFailure
> > > > > > > SaslMultiMechanismConsumerTest.testCoordinatorFailover
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Thanks for running the release Bill.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Regards,
> > > > > > > --Vahid
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Mar 6, 2020 at 9:20 AM Colin McCabe < cmcc...@apache.org > wrote:
> > > > > > >
> > > > > > >
> > > > > > >>
> > > > > > >>
> > > > > > >> +1 (binding)
> > > > > > >>
> > > > > > >>
> > > > > > >>
> > > > > > >> Checked the git hash and branch, looked at the docs a bit. Ran
> > > > > > quickstart
> > > > > > >> (although not the connect or streams parts). Looks good.
> > > > > > >>
> > > > > > >>
> > > > > > >>
> > > > > > >> best,
> > > > > > >> Colin
> > > > > > >>
> > > > > > >>
> > > > > > >>
> > > > > > >> On Fri, Mar 6, 2020, at 07:31, David Arthur wrote:
> > > > > > >>
> > > > > > >>
> > > > > > >>>
> > > > > > >>>
> > > > > > >>> +1 (binding)
> > > > > > >>>
> > > > > > >>>
> > > > > > >>>
> > > > > > >>> Download kafka_2.13-2.4.1 and verified signature, ran
> > quickstart,
> > > > > > >>> everything looks good.
> > > > > > >>>
> > > > > > >>>
> > > > > > >>>
> > > > > > >>> Thanks for running this release, Bill!
> > > > > > >>>
> > > > > > >>>
> > > > > > >>>
> > > > > > >>> -David
> > > > > > >>>
> > > > > > >>>
> > > > > > >>>
> > > > > > >>> On Wed, Mar 4, 2020 at 6:06 AM Eno Thereska < eno.there...@gmail.com > wrote:
> > > > > > >>
> > > > > > >>
> > > > > > >>>
> > > > > > 
> > > > > > 
> > > > > >  Hi Bill,
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > >  I built from source and ran unit and integration te

Re: [DISCUSS] Apache Kafka 2.5.0 release

2020-03-09 Thread Ismael Juma
Hi Bruno,

2.4.0 has been out for months. If this was not noticed since then, I don't
think it qualifies as a blocker for 2.5.0.

Ismael

On Mon, Mar 9, 2020 at 8:44 PM Bruno Cadonna  wrote:

> Hi David,
>
> A bug report was filed that can be considered a blocker. Basically,
> with this bug all RocksDB metrics reported by Streams are constant
> zero. The bug has been there since 2.4, so it is not a regression, but a
> broken feature.
>
> Here is the ticket: https://issues.apache.org/jira/browse/KAFKA-9675
> Here is the fix: https://github.com/apache/kafka/pull/8256
>
> Best,
> Bruno
>
> On Wed, Feb 26, 2020 at 11:22 PM Randall Hauch  wrote:
> >
> > Thanks, David. The PR has been merged to trunk and 2.5, and I'm
> backporting
> > to earlier branches. I'll resolve
> > https://issues.apache.org/jira/browse/KAFKA-9601 when I finish
> backporting.
> >
> > On Wed, Feb 26, 2020 at 1:28 PM David Arthur  wrote:
> >
> > > Thanks, Randall. Leaking sensitive config to the logs seems fairly
> > > severe. I think we should include this. Let's proceed with cherry-picking
> to
> > > 2.5.
> > >
> > > -David
> > >
> > > On Wed, Feb 26, 2020 at 2:25 PM Randall Hauch 
> wrote:
> > >
> > > > Hi, David.
> > > >
> > > > If we're still not quite ready for an RC, I'd like to squeeze in
> > > > https://issues.apache.org/jira/browse/KAFKA-9601, which removes the
> raw
> > > > connector config properties in a DEBUG level log message. PR is ready
> > > (test
> > > > failures are unrelated), the risk is very low, and I think it'd be
> great
> > > to
> > > > correct this sooner than later.
> > > >
> > > > Randall
> > > >
> > > > On Wed, Feb 26, 2020 at 11:26 AM David Arthur 
> wrote:
> > > >
> > > > > Viktor, the change LGTM. I've approved and merged the cherry-pick
> > > version
> > > > > into 2.5.
> > > > >
> > > > > Thanks!
> > > > > David
> > > > >
> > > > > On Tue, Feb 25, 2020 at 4:43 AM Viktor Somogyi-Vass <
> > > > > viktorsomo...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi David,
> > > > > >
> > > > > > There are two short JIRAs related to KIP-352 that documents the
> newly
> > > > > added
> > > > > > metrics. Is it possible to merge them in?
> > > > > > https://github.com/apache/kafka/pull/7434 (trunk)
> > > > > > https://github.com/apache/kafka/pull/8127 (2.5 cherry-pick)
> > > > > >
> > > > > > Thanks,
> > > > > > Viktor
> > > > > >
> > > > > >
> > > > > > On Mon, Feb 24, 2020 at 7:22 PM David Arthur 
> > > wrote:
> > > > > >
> > > > > > > Thanks, Tu. I've moved KIP-467 out of the release plan.
> > > > > > >
> > > > > > > -David
> > > > > > >
> > > > > > > On Thu, Feb 20, 2020 at 6:00 PM Tu Tran 
> wrote:
> > > > > > >
> > > > > > > > Hi David,
> > > > > > > >
> > > > > > > > Thanks for being the release main driver. Since the
> > > implementation
> > > > > for
> > > > > > > the
> > > > > > > > last part of KIP-467 wasn't finalized prior to Feb 12th,
> could
> > > you
> > > > > > remove
> > > > > > > > KIP-467 from the list?
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Tu
> > > > > > > >
> > > > > > > > On Thu, Feb 20, 2020 at 7:18 AM David Arthur <
> mum...@gmail.com>
> > > > > wrote:
> > > > > > > >
> > > > > > > > > Randall / Konstantine,
> > > > > > > > >
> > > > > > > > > Sorry for the late reply. Thanks for the fix and for the
> > > update!
> > > > I
> > > > > > see
> > > > > > > > this
> > > > > > > > > change on the 2.5 branch (@b403c66). Consider this a
> > > retroactive
> > > > > > > approval
> > > > > > > > > for this bugfix :)
> > > > > > > > >
> > > > > > > > > -David
> > > > > > > > >
> > > > > > > > > On Fri, Feb 14, 2020 at 2:21 PM Konstantine Karantasis <
> > > > > > > > > konstant...@confluent.io> wrote:
> > > > > > > > >
> > > > > > > > > > Hi David,
> > > > > > > > > >
> > > > > > > > > > I want to confirm what Randall mentions above. The code
> fixes
> > > > for
> > > > > > > > > > KAFKA-9556 were in place before code freeze on Wed, but
> we
> > > > spent
> > > > > a
> > > > > > > bit
> > > > > > > > > more
> > > > > > > > > > time hardening the conditions of the integration tests
> and
> > > > fixing
> > > > > > > some
> > > > > > > > > > jenkins branch builders to run the test on repeat.
> > > > > > > > > >
> > > > > > > > > > Best,
> > > > > > > > > > Konstantine
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > On Fri, Feb 14, 2020 at 7:42 AM Randall Hauch <
> > > > rha...@gmail.com>
> > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > > Hi, David.
> > > > > > > > > > >
> > > > > > > > > > > I just filed
> > > > https://issues.apache.org/jira/browse/KAFKA-9556
> > > > > > that
> > > > > > > > > > > identifies two pretty minor issues with the new KIP-558
> > > that
> > > > > adds
> > > > > > > new
> > > > > > > > > > > Connect REST API endpoints to get the list of topics
> used
> > > by
> > > > a
> > > > > > > > > connector.
> > > > > > > > > > > The impact is high: the feature cannot be fully
> disabled,
> > > and
> > > > > > > Connect
> > > > > > > > > > does
> > > > 

Re: [DISCUSS] Apache Kafka 2.5.0 release

2020-03-09 Thread Bruno Cadonna
Hi David,

A bug report was filed that can be considered a blocker. Basically,
with this bug all RocksDB metrics reported by Streams are constant
zero. The bug has been there since 2.4, so it is not a regression, but a
broken feature.

Here is the ticket: https://issues.apache.org/jira/browse/KAFKA-9675
Here is the fix: https://github.com/apache/kafka/pull/8256

Best,
Bruno

On Wed, Feb 26, 2020 at 11:22 PM Randall Hauch  wrote:
>
> Thanks, David. The PR has been merged to trunk and 2.5, and I'm backporting
> to earlier branches. I'll resolve
> https://issues.apache.org/jira/browse/KAFKA-9601 when I finish backporting.
>
> On Wed, Feb 26, 2020 at 1:28 PM David Arthur  wrote:
>
> > Thanks, Randall. Leaking sensitive config to the logs seems fairly
> > severe. I think we should include this. Let's proceed with cherry-picking to
> > 2.5.
> >
> > -David
> >
> > On Wed, Feb 26, 2020 at 2:25 PM Randall Hauch  wrote:
> >
> > > Hi, David.
> > >
> > > If we're still not quite ready for an RC, I'd like to squeeze in
> > > https://issues.apache.org/jira/browse/KAFKA-9601, which removes the raw
> > > connector config properties in a DEBUG level log message. PR is ready
> > (test
> > > failures are unrelated), the risk is very low, and I think it'd be great
> > to
> > > correct this sooner than later.
> > >
> > > Randall
> > >
> > > On Wed, Feb 26, 2020 at 11:26 AM David Arthur  wrote:
> > >
> > > > Viktor, the change LGTM. I've approved and merged the cherry-pick
> > version
> > > > into 2.5.
> > > >
> > > > Thanks!
> > > > David
> > > >
> > > > On Tue, Feb 25, 2020 at 4:43 AM Viktor Somogyi-Vass <
> > > > viktorsomo...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi David,
> > > > >
> > > > > There are two short JIRAs related to KIP-352 that documents the newly
> > > > added
> > > > > metrics. Is it possible to merge them in?
> > > > > https://github.com/apache/kafka/pull/7434 (trunk)
> > > > > https://github.com/apache/kafka/pull/8127 (2.5 cherry-pick)
> > > > >
> > > > > Thanks,
> > > > > Viktor
> > > > >
> > > > >
> > > > > On Mon, Feb 24, 2020 at 7:22 PM David Arthur 
> > wrote:
> > > > >
> > > > > > Thanks, Tu. I've moved KIP-467 out of the release plan.
> > > > > >
> > > > > > -David
> > > > > >
> > > > > > On Thu, Feb 20, 2020 at 6:00 PM Tu Tran  wrote:
> > > > > >
> > > > > > > Hi David,
> > > > > > >
> > > > > > > Thanks for being the release main driver. Since the
> > implementation
> > > > for
> > > > > > the
> > > > > > > last part of KIP-467 wasn't finalized prior to Feb 12th, could
> > you
> > > > > remove
> > > > > > > KIP-467 from the list?
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Tu
> > > > > > >
> > > > > > > On Thu, Feb 20, 2020 at 7:18 AM David Arthur 
> > > > wrote:
> > > > > > >
> > > > > > > > Randall / Konstantine,
> > > > > > > >
> > > > > > > > Sorry for the late reply. Thanks for the fix and for the
> > update!
> > > I
> > > > > see
> > > > > > > this
> > > > > > > > change on the 2.5 branch (@b403c66). Consider this a
> > retroactive
> > > > > > approval
> > > > > > > > for this bugfix :)
> > > > > > > >
> > > > > > > > -David
> > > > > > > >
> > > > > > > > On Fri, Feb 14, 2020 at 2:21 PM Konstantine Karantasis <
> > > > > > > > konstant...@confluent.io> wrote:
> > > > > > > >
> > > > > > > > > Hi David,
> > > > > > > > >
> > > > > > > > > I want to confirm what Randall mentions above. The code fixes
> > > for
> > > > > > > > > KAFKA-9556 were in place before code freeze on Wed, but we
> > > spent
> > > > a
> > > > > > bit
> > > > > > > > more
> > > > > > > > > time hardening the conditions of the integration tests and
> > > fixing
> > > > > > some
> > > > > > > > > jenkins branch builders to run the test on repeat.
> > > > > > > > >
> > > > > > > > > Best,
> > > > > > > > > Konstantine
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > On Fri, Feb 14, 2020 at 7:42 AM Randall Hauch <
> > > rha...@gmail.com>
> > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Hi, David.
> > > > > > > > > >
> > > > > > > > > > I just filed
> > > https://issues.apache.org/jira/browse/KAFKA-9556
> > > > > that
> > > > > > > > > > identifies two pretty minor issues with the new KIP-558
> > that
> > > > adds
> > > > > > new
> > > > > > > > > > Connect REST API endpoints to get the list of topics used
> > by
> > > a
> > > > > > > > connector.
> > > > > > > > > > The impact is high: the feature cannot be fully disabled,
> > and
> > > > > > Connect
> > > > > > > > > does
> > > > > > > > > > not automatically reset the topic set when a connector is
> > > > > deleted.
> > > > > > > > > > https://github.com/apache/kafka/pull/8085 includes the two
> > > > > fixes,
> > > > > > > and
> > > > > > > > > also
> > > > > > > > > > adds more unit and integration tests for this feature.
> > > > Although I
> > > > > > > just
> > > > > > > > > > created the blocker this AM, Konstantine has actually be
> > > > working
> > > > > on
> > > > > > > the
> > > > > > > > > fix
> > > > > > > > > > for four days. Risk of merging

Re: Subject: [VOTE] 2.4.1 RC0

2020-03-09 Thread Bill Bejeck
Thanks to everyone for voting.

A new blocker has surfaced https://issues.apache.org/jira/browse/KAFKA-9675,
so I'll do another RC soon.

Thanks again.
Bill

On Mon, Mar 9, 2020 at 1:35 PM Levani Kokhreidze 
wrote:

> +1 non-binding.
>
> - Built from source
> - Ran unit tests. All passed.
> - Quickstart passed.
>
> Looking forward to upgrading to 2.4.1
>
> Regards,
> Levani
>
> On Mon, 9 Mar 2020, 17:11 Sean Glover,  wrote:
>
> > +1 (non-binding).  I built from source and ran the unit test suite
> > successfully.
> >
> > Thanks for running this release.  I'm looking forward to upgrading to
> > 2.4.1.
> >
> > Sean
> >
> > On Mon, Mar 9, 2020 at 8:07 AM Mickael Maison 
> > wrote:
> >
> > > Thanks for running the release!
> > > +1 (binding)
> > >
> > > - Verified signatures
> > > - Built from source
> > > - Ran unit tests, all passed
> > > - Ran through quickstart steps, all worked
> > >
> > > On Mon, Mar 9, 2020 at 11:04 AM Tom Bentley 
> wrote:
> > > >
> > > > +1 (non-binding)
> > > >
> > > > Built from source, all unit tests passed.
> > > >
> > > > Thanks Bill.
> > > >
> > > > On Mon, Mar 9, 2020 at 3:44 AM Gwen Shapira 
> wrote:
> > > >
> > > > > +1 (binding)
> > > > >
> > > > > Verified signatures, built jars from source, quickstart passed and
> > > local
> > > > > unit tests all passed.
> > > > >
> > > > > Thank you for the release Bill!
> > > > >
> > > > > Gwen Shapira
> > > > > Engineering Manager | Confluent
> > > > > 650.450.2760 | @gwenshap
> > > > > Follow us: Twitter | blog
> > > > >
> > > > > On Sat, Mar 07, 2020 at 8:15 PM, Vahid Hashemian <
> > > > > vahid.hashem...@gmail.com > wrote:
> > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > +1 (binding)
> > > > > >
> > > > > >
> > > > > >
> > > > > > Verified signature, built from source, and ran quickstart
> > > successfully
> > > > > > (using openjdk version "11.0.6"). I also ran unit tests locally
> > which
> > > > > > resulted in a few flaky tests for which there are already open
> > Jiras:
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker
> > > > > > ConsumerBounceTest.testCloseDuringRebalance
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > >
> >
> ConsumerBounceTest.testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize
> > > > > >
> > >
> PlaintextEndToEndAuthorizationTest.testNoConsumeWithDescribeAclViaAssign
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > >
> >
> SaslClientsWithInvalidCredentialsTest.testManualAssignmentConsumerWithAuthenticationFailure
> > > > > > SaslMultiMechanismConsumerTest.testCoordinatorFailover
> > > > > >
> > > > > >
> > > > > >
> > > > > > Thanks for running the release Bill.
> > > > > >
> > > > > >
> > > > > >
> > > > > > Regards,
> > > > > > --Vahid
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Fri, Mar 6, 2020 at 9:20 AM Colin McCabe < cmcc...@apache.org > wrote:
> > > > > >
> > > > > >
> > > > > >>
> > > > > >>
> > > > > >> +1 (binding)
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> Checked the git hash and branch, looked at the docs a bit. Ran
> > > > > quickstart
> > > > > >> (although not the connect or streams parts). Looks good.
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> best,
> > > > > >> Colin
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> On Fri, Mar 6, 2020, at 07:31, David Arthur wrote:
> > > > > >>
> > > > > >>
> > > > > >>>
> > > > > >>>
> > > > > >>> +1 (binding)
> > > > > >>>
> > > > > >>>
> > > > > >>>
> > > > > >>> Download kafka_2.13-2.4.1 and verified signature, ran
> quickstart,
> > > > > >>> everything looks good.
> > > > > >>>
> > > > > >>>
> > > > > >>>
> > > > > >>> Thanks for running this release, Bill!
> > > > > >>>
> > > > > >>>
> > > > > >>>
> > > > > >>> -David
> > > > > >>>
> > > > > >>>
> > > > > >>>
> > > > > >>> On Wed, Mar 4, 2020 at 6:06 AM Eno Thereska < eno.there...@gmail.com > wrote:
> > > > > >>
> > > > > >>
> > > > > >>>
> > > > > 
> > > > > 
> > > > >  Hi Bill,
> > > > > 
> > > > > 
> > > > > 
> > > > >  I built from source and ran unit and integration tests. They
> > > passed.
> > > > > There
> > > > >  was a large number of skipped tests, but I'm assuming that is
> > > > > intentional.
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > >  Cheers
> > > > >  Eno
> > > > > 
> > > > > 
> > > > > 
> > > > >  On Tue, Mar 3, 2020 at 8:42 PM Eric Lalonde < e...@autonomic.ai > wrote:
> > > > > 
> > > > > 
> > > > > >
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > >
> > > > > >
> > > > > > I ran:
> > > > > > $
> > > > > >
> > > > > >
> > > > > 
> > > > > 
> > > > > >>>
> > > > > >>>
> > > > > >>
> > > 

Build failed in Jenkins: kafka-trunk-jdk11 #1223

2020-03-09 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Remove throttling logic from RecordAccumulator (#7195)


--
[...truncated 2.90 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apach

Build failed in Jenkins: kafka-trunk-jdk8 #4304

2020-03-09 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Remove throttling logic from RecordAccumulator (#7195)


--
[...truncated 2.89 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task

Re: [Vote] KIP-571: Add option to force remove members in StreamsResetter

2020-03-09 Thread Sophie Blee-Goldman
Hey Feyman,

1) Regarding point 2 in your last email, if I understand correctly you
propose to change
the current behavior of the reset tool when --force is not specified, and
wait for (up to)
the session timeout for all members to be removed. I'm not sure we should
change this,
especially now that we have a better way to handle the case when the group
is not empty:
we should continue to throw an exception and fail fast, but can print a
message suggesting
to use the new --force option to remove remaining group members. Why make
users wait
for the session timeout when we've just added a new feature that means they
don't have to?

2) Regarding Matthias' question:

> I am really wondering, if for a static group, we should allow users to
remove individual members? For a dynamic group this feature would not
make much sense IMHO, because the `memberId` is not known by the user.

I think his point is similar to what I was trying to get at earlier, with
the proposal to add a new
#removeAllMembers API rather than an API to remove individual members
according to their
memberId. As he explained, removing based on memberId is likely not that
useful in general.
Also, it's not actually what we want to do here; maybe we should avoid
adding a new API
that we *think* may be useful in other contexts (remove individual member
based on memberId),
and just add the API we actually need (remove *all* members from the group) in
this KIP? We can
always add the "remove individual member by memberId" API at a later point,
if it turns out to
actually be requested for specific reasons?

Also, it's more efficient to just send a single "clear the group" request
vs sending a LeaveGroup
request for every single member. What do you think?
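For illustration, here is a rough sketch of what a single remove-all call could
look like from the admin client. This assumes KIP-571 ends up adding a
no-argument RemoveMembersFromConsumerGroupOptions that means "remove every
member"; the group name and the exact option shape are illustrative only:

```java
// Rough sketch only: assumes the KIP adds a no-arg
// RemoveMembersFromConsumerGroupOptions meaning "remove all members".
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.RemoveMembersFromConsumerGroupOptions;

public class RemoveAllMembersSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // One request clears the whole group, instead of one
            // LeaveGroup round trip per member.
            admin.removeMembersFromConsumerGroup(
                    "my-streams-app",
                    new RemoveMembersFromConsumerGroupOptions())
                .all()
                .get();
        }
    }
}
```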




On Sat, Mar 7, 2020 at 1:41 AM feyman2009 
wrote:

> Hi, Matthias
> Thanks, I updated the KIP to mention the deprecated and newly added
> methods.
>
> 1. What happens if `groupInstanceId` is used for a dynamic group? What
> happens if both parameters are specified? What happens if `memberId`
> is specified for a static group?
>
> => my understanding is that dynamic/static membership is at the member level
> rather than the group level, and I think the above questions can be answered by
> the "Leave Group Logic Change" section in KIP-345:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances,
> this KIP stays consistent with KIP-345.
>
> 2. About the `--force` option. Currently, StreamsResetter fails with an
> error if the consumer group is not empty. You state in your KIP that:
>
> > without --force, we need to wait for session timeout.
>
> Is this an intended behavior change if `--force` is not used or is the
> KIP description incorrect?
>
> => This is the intended behavior. For this part, I think there are two
> ways to go:
> 1) (the implicit way) Do not introduce the new "--force" option; with this
> KIP, StreamsResetter will by default remove active members (with a long
> session timeout configured) on the broker side.
> 2) (the explicit way) Introduce the new "--force" option; users need to
> explicitly specify --force to remove the active members. If --force is not
> specified, StreamsResetter behaves as in previous versions.
>
> I think the two alternatives above are both feasible, personally I prefer
> way 2.
>
> 3. For my own understanding: with the `--force` option, we intend to get
> all `memberIds` and send a "remove member" request for each with
> corresponding `memberId` to remove the member from the group
> (independent is the group is static or dynamic)?
>
> => Yeah, one minor thing to mention is that we will send the "remove member"
> request for each member (whether dynamic or static) to remove
> them from the group:
> for static members, both "group.instance.id" and "member.id" will be
> specified;
> for dynamic members, only "member.id" will be specified
>
> 4. I am really wondering, if for a static group, we should allow users to
> remove individual members? For a dynamic group this feature would not
> make much sense IMHO, because the `memberId` is not known by the user.
>
> => KIP-345 introduced the batch removal feature for both static and
> dynamic members; my understanding is that "allow users to
> remove individual members" could be useful for rolling bounces and scale
> downs according to KIP-345. KafkaAdminClient currently only supports static
> member removal, and this KIP-571 enables dynamic member removal for it,
> which is also consistent with the broker-side logic. Users can get the
> member.id (and group.instance.id for a static member) via
> adminClient.describeConsumerGroups.
>
> Furthermore, I don't assume that a consumer group contains only static or
> only dynamic members, and I think KIP-345 and this KIP
> don't need to be based on that assumption.
> You could correct me if I have the wrong understanding :)
>
> Thanks!
> Feyman
>
>
>
>
>
>
>
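As a side note on the adminClient.describeConsumerGroups point above, a
minimal sketch of how a caller could list member.id and group.instance.id with
the existing Admin API (the group name is illustrative):

```java
// Minimal sketch using the existing Admin#describeConsumerGroups API.
import java.util.Collections;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;
import org.apache.kafka.clients.admin.MemberDescription;

public class ListGroupMembersSketch {
    // Prints member.id and group.instance.id for every member of groupId.
    static void listMembers(Admin admin, String groupId)
            throws InterruptedException, ExecutionException {
        ConsumerGroupDescription description = admin
                .describeConsumerGroups(Collections.singletonList(groupId))
                .describedGroups()
                .get(groupId)
                .get();
        for (MemberDescription member : description.members()) {
            // groupInstanceId is present only for static members.
            System.out.printf("member.id=%s, group.instance.id=%s%n",
                    member.consumerId(),
                    member.groupInstanceId().orElse("(dynamic)"));
        }
    }
}
```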
> 

[jira] [Created] (KAFKA-9688) kafka-topic.sh should show KIP-455 adding and removing replicas

2020-03-09 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-9688:
---

 Summary: kafka-topic.sh should show KIP-455 adding and removing 
replicas
 Key: KAFKA-9688
 URL: https://issues.apache.org/jira/browse/KAFKA-9688
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Colin McCabe
Assignee: Colin McCabe


kafka-topic.sh should show the adding and removing replicas of an in-progress 
partition reassignment, as described in KIP-455.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.3-jdk8 #183

2020-03-09 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8245: Fix Flaky Test


--
[...truncated 2.95 MB...]
kafka.log.LogCleanerTest > testPartialSegmentClean STARTED

kafka.log.LogCleanerTest > testPartialSegmentClean PASSED

kafka.log.LogCleanerTest > testCommitMarkerRemoval STARTED

kafka.log.LogCleanerTest > testCommitMarkerRemoval PASSED

kafka.log.LogCleanerTest > testCleanSegmentsWithConcurrentSegmentDeletion 
STARTED

kafka.log.LogCleanerTest > testCleanSegmentsWithConcurrentSegmentDeletion PASSED

kafka.log.LogValidatorTest > testRecompressedBatchWithoutRecordsNotAllowed 
STARTED

kafka.log.LogValidatorTest > testRecompressedBatchWithoutRecordsNotAllowed 
PASSED

kafka.log.LogValidatorTest > testCompressedV1 STARTED

kafka.log.LogValidatorTest > testCompressedV1 PASSED

kafka.log.LogValidatorTest > testCompressedV2 STARTED

kafka.log.LogValidatorTest > testCompressedV2 PASSED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
STARTED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed PASSED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed STARTED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed PASSED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion STARTED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted STARTED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 PASSED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients STARTED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeNonCompressedV1 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0NonCompressed PASSED

kafka.log.LogValidatorTest > testControlRecordsNotCompressed STARTED

kafka.log.LogValidatorTest > testControlRecordsNotCompressed PASSED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testInvalidCreateTimeNo

Re: [DISCUSS] KIP-574: CLI Dynamic Configuration with file input

2020-03-09 Thread Aneel Nazareth
Hi Colin,

Thanks for the suggestion. Using a dash does seem reasonable. I can
make that change.

On Mon, Mar 9, 2020 at 12:35 PM Colin McCabe  wrote:
>
> Hi Aneel,
>
> Thanks for the KIP.  I like the idea.
>
> You mention that "input from STDIN can be used instead of a file on disk."  
> The example given in the KIP seems to suggest that the command defaults to 
> reading from STDIN if no argument is given to --add-config-file.
>
> I would argue against this particular command-line pattern.  From the user's 
> point of view, if they mess up and forget to supply an argument, or for some 
> reason the parser doesn't treat something like an argument, the program will 
> appear to hang in a confusing way.
>
> Instead, it would be better to follow the traditional UNIX pattern where a 
> dash indicates that STDIN should be read.  So "--add-config-file -" would 
> indicate that the program should read from STDIN.  This would be difficult to 
> trigger accidentally, and more in line with the traditional conventions.
>
> On Mon, Mar 9, 2020, at 08:47, David Jacot wrote:
> > I wonder if we should also add a `--delete-config-file` as a counterpart of
> > `--add-config-file`. It would be a bit weird to use a properties file in
> > this case as the values are not necessary but it may be handy to have the
> > possibility to remove the configurations which have been set. Have you
> > considered this?
>
> Hi David,
>
> That's an interesting idea.  However, I think it might be confusing to users 
> to supply a file, and then have the values supplied in that file be ignored.  
> Is there really a case where we need to do this (as opposed to creating a 
> file with blank values, or just passing the keys to --delete-config)?
>
> best,
> Colin
>
> >
> > David
> >
> > On Thu, Feb 27, 2020 at 11:15 PM Aneel Nazareth  wrote:
> >
> > > I've created a PR for a potential implementation of this:
> > > https://github.com/apache/kafka/pull/8184 if we decide to go ahead with
> > > this KIP.
> > >
> > > On Wed, Feb 26, 2020 at 12:36 PM Aneel Nazareth 
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > I'd like to discuss adding a new argument to kafka-configs.sh
> > > > (ConfigCommand.scala).
> > > >
> > > > Recently I've been working on some things that require complex
> > > > configurations. I've chosen to represent them as JSON strings in my
> > > > server.properties. This works well, and I'm able to update the
> > > > configurations by editing server.properties and restarting the broker.
> > > I've
> > > > added the ability to dynamically configure them, and that works well
> > > using
> > > > the AdminClient. However, when I try to update these configurations 
> > > > using
> > > > kafka-configs.sh, I run into a problem. My configurations contain 
> > > > commas,
> > > > and kafka-configs.sh tries to break them up into key/value pairs at the
> > > > comma boundary.
> > > >
> > > > I'd like to enable setting these configurations from the command line, 
> > > > so
> > > > I'm proposing that we add a new option to kafka-configs.sh that takes a
> > > > properties file.
> > > >
> > > > I've created a KIP for this idea:
> > > >
> > > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input
> > > > And a JIRA: https://issues.apache.org/jira/browse/KAFKA-9612
> > > >
> > > > I'd appreciate your feedback on the proposal.
> > > >
> > > > Thanks,
> > > > Aneel
> > > >
> > >
> >


Re: [DISCUSS] KIP-574: CLI Dynamic Configuration with file input

2020-03-09 Thread Aneel Nazareth
Hi David,

Is the expected behavior that the keys are deleted without checking the values?

Let's say I had this file new.properties:
a=1
b=2

And ran:

bin/kafka-configs --bootstrap-server localhost:9092 \
  --entity-type brokers --entity-default \
  --alter --add-config-file new.properties

It seems clear what should happen if I run this immediately:

bin/kafka-configs --bootstrap-server localhost:9092 \
  --entity-type brokers --entity-default \
  --alter --delete-config-file new.properties

(Namely that both a and b would now have no values in the config)

But what if this were run in-between:

bin/kafka-configs --bootstrap-server localhost:9092 \
  --entity-type brokers --entity-default \
  --alter --add-config a=3

Would it be surprising if the key/value pair a=3 was deleted, even
though the config that is in the file is a=1? Or would that be
expected?
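For what it's worth, a tiny sketch of the key-only semantics that seems least
surprising to me, where the values in the file are ignored entirely
(illustrative only, not actual ConfigCommand code):

```java
// Illustrative only: one possible semantics for --delete-config-file,
// where only the key set from the file matters and current values are
// never checked (so the a=3 set in between would still be deleted).
import java.io.FileReader;
import java.io.IOException;
import java.util.Properties;

public class DeleteConfigFileSketch {
    public static void main(String[] args) throws IOException {
        Properties toDelete = new Properties();
        try (FileReader reader = new FileReader("new.properties")) {
            toDelete.load(reader);
        }
        for (String key : toDelete.stringPropertyNames()) {
            // Values from the file are ignored; deletion is by key only.
            System.out.println("would delete config key: " + key);
        }
    }
}
```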

On Mon, Mar 9, 2020 at 1:02 PM David Jacot  wrote:
>
> Hi Colin,
>
> Yes, you're right. This is weird but convenient because you don't have to
> duplicate the "keys". I was thinking of the Kubernetes API, which lets you
> create a Pod from a file and delete it with the same file. I have always
> found this convenient, especially when doing local tests.
>
> Best,
> David
>
> On Mon, Mar 9, 2020 at 6:35 PM Colin McCabe  wrote:
>
> > Hi Aneel,
> >
> > Thanks for the KIP.  I like the idea.
> >
> > You mention that "input from STDIN can be used instead of a file on
> > disk."  The example given in the KIP seems to suggest that the command
> > defaults to reading from STDIN if no argument is given to --add-config-file.
> >
> > I would argue against this particular command-line pattern.  From the
> > user's point of view, if they mess up and forget to supply an argument, or
> > for some reason the parser doesn't treat something like an argument, the
> > program will appear to hang in a confusing way.
> >
> > Instead, it would be better to follow the traditional UNIX pattern where a
> > dash indicates that STDIN should be read.  So "--add-config-file -" would
> > indicate that the program should read from STDIN.  This would be difficult
> > to trigger accidentally, and more in line with the traditional conventions.
> >
> > On Mon, Mar 9, 2020, at 08:47, David Jacot wrote:
> > > I wonder if we should also add a `--delete-config-file` as a counterpart
> > of
> > > `--add-config-file`. It would be a bit weird to use a properties file in
> > > this case as the values are not necessary but it may be handy to have the
> > > possibility to remove the configurations which have been set. Have you
> > > considered this?
> >
> > Hi David,
> >
> > That's an interesting idea.  However, I think it might be confusing to
> > users to supply a file, and then have the values supplied in that file be
> > ignored.  Is there really a case where we need to do this (as opposed to
> > creating a file with blank values, or just passing the keys to
> > --delete-config)?
> >
> > best,
> > Colin
> >
> > >
> > > David
> > >
> > > On Thu, Feb 27, 2020 at 11:15 PM Aneel Nazareth 
> > wrote:
> > >
> > > > I've created a PR for a potential implementation of this:
> > > > https://github.com/apache/kafka/pull/8184 if we decide to go ahead
> > with
> > > > this KIP.
> > > >
> > > > On Wed, Feb 26, 2020 at 12:36 PM Aneel Nazareth 
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I'd like to discuss adding a new argument to kafka-configs.sh
> > > > > (ConfigCommand.scala).
> > > > >
> > > > > Recently I've been working on some things that require complex
> > > > > configurations. I've chosen to represent them as JSON strings in my
> > > > > server.properties. This works well, and I'm able to update the
> > > > > configurations by editing server.properties and restarting the
> > broker.
> > > > I've
> > > > > added the ability to dynamically configure them, and that works well
> > > > using
> > > > > the AdminClient. However, when I try to update these configurations
> > using
> > > > > kafka-configs.sh, I run into a problem. My configurations contain
> > commas,
> > > > > and kafka-configs.sh tries to break them up into key/value pairs at
> > the
> > > > > comma boundary.
> > > > >
> > > > > I'd like to enable setting these configurations from the command
> > line, so
> > > > > I'm proposing that we add a new option to kafka-configs.sh that
> > takes a
> > > > > properties file.
> > > > >
> > > > > I've created a KIP for this idea:
> > > > >
> > > > >
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input
> > > > > And a JIRA: https://issues.apache.org/jira/browse/KAFKA-9612
> > > > >
> > > > > I'd appreciate your feedback on the proposal.
> > > > >
> > > > > Thanks,
> > > > > Aneel
> > > > >
> > > >
> > >
> >


Jenkins build is back to normal : kafka-2.2-jdk8 #33

2020-03-09 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-574: CLI Dynamic Configuration with file input

2020-03-09 Thread David Jacot
Hi Colin,

Yes, you're right. This is weird but convenient because you don't have to
duplicate the "keys". I was thinking of the Kubernetes API, which lets you
create a Pod from a file and delete it with the same file. I have always
found this convenient, especially when doing local tests.

Best,
David

On Mon, Mar 9, 2020 at 6:35 PM Colin McCabe  wrote:

> Hi Aneel,
>
> Thanks for the KIP.  I like the idea.
>
> You mention that "input from STDIN can be used instead of a file on
> disk."  The example given in the KIP seems to suggest that the command
> defaults to reading from STDIN if no argument is given to --add-config-file.
>
> I would argue against this particular command-line pattern.  From the
> user's point of view, if they mess up and forget to supply an argument, or
> for some reason the parser doesn't treat something like an argument, the
> program will appear to hang in a confusing way.
>
> Instead, it would be better to follow the traditional UNIX pattern where a
> dash indicates that STDIN should be read.  So "--add-config-file -" would
> indicate that the program should read from STDIN.  This would be difficult
> to trigger accidentally, and more in line with the traditional conventions.
>
> On Mon, Mar 9, 2020, at 08:47, David Jacot wrote:
> > I wonder if we should also add a `--delete-config-file` as a counterpart
> of
> > `--add-config-file`. It would be a bit weird to use a properties file in
> > this case as the values are not necessary but it may be handy to have the
> > possibility to remove the configurations which have been set. Have you
> > considered this?
>
> Hi David,
>
> That's an interesting idea.  However, I think it might be confusing to
> users to supply a file, and then have the values supplied in that file be
> ignored.  Is there really a case where we need to do this (as opposed to
> creating a file with blank values, or just passing the keys to
> --delete-config)?
>
> best,
> Colin
>
> >
> > David
> >
> > On Thu, Feb 27, 2020 at 11:15 PM Aneel Nazareth 
> wrote:
> >
> > > I've created a PR for a potential implementation of this:
> > > https://github.com/apache/kafka/pull/8184 if we decide to go ahead
> with
> > > this KIP.
> > >
> > > On Wed, Feb 26, 2020 at 12:36 PM Aneel Nazareth 
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > I'd like to discuss adding a new argument to kafka-configs.sh
> > > > (ConfigCommand.scala).
> > > >
> > > > Recently I've been working on some things that require complex
> > > > configurations. I've chosen to represent them as JSON strings in my
> > > > server.properties. This works well, and I'm able to update the
> > > > configurations by editing server.properties and restarting the
> broker.
> > > I've
> > > > added the ability to dynamically configure them, and that works well
> > > using
> > > > the AdminClient. However, when I try to update these configurations
> using
> > > > kafka-configs.sh, I run into a problem. My configurations contain
> commas,
> > > > and kafka-configs.sh tries to break them up into key/value pairs at
> the
> > > > comma boundary.
> > > >
> > > > I'd like to enable setting these configurations from the command
> line, so
> > > > I'm proposing that we add a new option to kafka-configs.sh that
> takes a
> > > > properties file.
> > > >
> > > > I've created a KIP for this idea:
> > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input
> > > > And a JIRA: https://issues.apache.org/jira/browse/KAFKA-9612
> > > >
> > > > I'd appreciate your feedback on the proposal.
> > > >
> > > > Thanks,
> > > > Aneel
> > > >
> > >
> >
>


Can I ask protocol questions here?

2020-03-09 Thread Chris Done
Hi all,

I'm writing a Kafka client at the protocol level and was wondering whether 
here or the users@ mailing list is more appropriate for questions of that 
nature.

I looked on the web site, but didn't see clarification on this point.

I'll start a fresh thread if here is indeed the correct place. I have a 
question to ask about fetch requests not working in specific conditions.

Thanks,

Chris

Re: [DISCUSS] KIP-574: CLI Dynamic Configuration with file input

2020-03-09 Thread Colin McCabe
Hi Aneel,

Thanks for the KIP.  I like the idea.

You mention that "input from STDIN can be used instead of a file on disk."  The 
example given in the KIP seems to suggest that the command defaults to reading 
from STDIN if no argument is given to --add-config-file.

I would argue against this particular command-line pattern.  From the user's 
point of view, if they mess up and forget to supply an argument, or for some 
reason the parser doesn't treat something like an argument, the program will 
appear to hang in a confusing way.

Instead, it would be better to follow the traditional UNIX pattern where a dash 
indicates that STDIN should be read.  So "--add-config-file -" would indicate 
that the program should read from STDIN.  This would be difficult to trigger 
accidentally, and more in line with the traditional conventions.
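To make the convention concrete, a rough sketch of the dash handling
(illustrative only, not the actual ConfigCommand code):

```java
// Illustrative sketch of the "-" convention: read the properties from
// STDIN when the argument is "-", otherwise from the named file.
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class AddConfigFileSketch {
    static Properties loadConfigFile(String arg) throws IOException {
        Properties props = new Properties();
        // Closing System.in here is acceptable for a one-shot CLI tool.
        try (InputStream in =
                "-".equals(arg) ? System.in : new FileInputStream(arg)) {
            props.load(in);
        }
        return props;
    }
}
```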

On Mon, Mar 9, 2020, at 08:47, David Jacot wrote:
> I wonder if we should also add a `--delete-config-file` as a counterpart of
> `--add-config-file`. It would be a bit weird to use a properties file in
> this case as the values are not necessary but it may be handy to have the
> possibility to remove the configurations which have been set. Have you
> considered this?

Hi David,

That's an interesting idea.  However, I think it might be confusing to users to 
supply a file, and then have the values supplied in that file be ignored.  Is 
there really a case where we need to do this (as opposed to creating a file 
with blank values, or just passing the keys to --delete-config)?

best,
Colin

> 
> David
> 
> On Thu, Feb 27, 2020 at 11:15 PM Aneel Nazareth  wrote:
> 
> > I've created a PR for a potential implementation of this:
> > https://github.com/apache/kafka/pull/8184 if we decide to go ahead with
> > this KIP.
> >
> > On Wed, Feb 26, 2020 at 12:36 PM Aneel Nazareth 
> > wrote:
> >
> > > Hi,
> > >
> > > I'd like to discuss adding a new argument to kafka-configs.sh
> > > (ConfigCommand.scala).
> > >
> > > Recently I've been working on some things that require complex
> > > configurations. I've chosen to represent them as JSON strings in my
> > > server.properties. This works well, and I'm able to update the
> > > configurations by editing server.properties and restarting the broker.
> > I've
> > > added the ability to dynamically configure them, and that works well
> > using
> > > the AdminClient. However, when I try to update these configurations using
> > > kafka-configs.sh, I run into a problem. My configurations contain commas,
> > > and kafka-configs.sh tries to break them up into key/value pairs at the
> > > comma boundary.
> > >
> > > I'd like to enable setting these configurations from the command line, so
> > > I'm proposing that we add a new option to kafka-configs.sh that takes a
> > > properties file.
> > >
> > > I've created a KIP for this idea:
> > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input
> > > And a JIRA: https://issues.apache.org/jira/browse/KAFKA-9612
> > >
> > > I'd appreciate your feedback on the proposal.
> > >
> > > Thanks,
> > > Aneel
> > >
> >
>


Re: Subject: [VOTE] 2.4.1 RC0

2020-03-09 Thread Levani Kokhreidze
+1 non-binding.

- Built from source
- Ran unit tests. All passed.
- Quickstart passed.

Looking forward upgrading to 2.4.1

Regards,
Levani

On Mon, 9 Mar 2020, 17:11 Sean Glover,  wrote:

> +1 (non-binding).  I built from source and ran the unit test suite
> successfully.
>
> Thanks for running this release.  I'm looking forward to upgrading to
> 2.4.1.
>
> Sean
>
> On Mon, Mar 9, 2020 at 8:07 AM Mickael Maison 
> wrote:
>
> > Thanks for running the release!
> > +1 (binding)
> >
> > - Verified signatures
> > - Built from source
> > - Ran unit tests, all passed
> > - Ran through quickstart steps, all worked
> >
> > On Mon, Mar 9, 2020 at 11:04 AM Tom Bentley  wrote:
> > >
> > > +1 (non-binding)
> > >
> > > Built from source, all unit tests passed.
> > >
> > > Thanks Bill.
> > >
> > > On Mon, Mar 9, 2020 at 3:44 AM Gwen Shapira  wrote:
> > >
> > > > +1 (binding)
> > > >
> > > > Verified signatures, built jars from source, quickstart passed and
> > local
> > > > unit tests all passed.
> > > >
> > > > Thank you for the release Bill!
> > > >
> > > > Gwen Shapira
> > > > Engineering Manager | Confluent
> > > > 650.450.2760 | @gwenshap
> > > > Follow us: Twitter | blog
> > > >
> > > > On Sat, Mar 07, 2020 at 8:15 PM, Vahid Hashemian <
> > > > vahid.hashem...@gmail.com > wrote:
> > > >
> > > > >
> > > > >
> > > > >
> > > > > +1 (binding)
> > > > >
> > > > >
> > > > >
> > > > > Verified signature, built from source, and ran quickstart
> > successfully
> > > > > (using openjdk version "11.0.6"). I also ran unit tests locally
> which
> > > > > resulted in a few flaky tests for which there are already open
> Jiras:
> > > > >
> > > > >
> > > > >
> > > > > ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker
> > > > > ConsumerBounceTest.testCloseDuringRebalance
> > > > >
> > > > >
> > > > >
> > > > >
> > > >
> >
> ConsumerBounceTest.testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize
> > > > >
> > PlaintextEndToEndAuthorizationTest.testNoConsumeWithDescribeAclViaAssign
> > > > >
> > > > >
> > > > >
> > > > >
> > > >
> >
> SaslClientsWithInvalidCredentialsTest.testManualAssignmentConsumerWithAuthenticationFailure
> > > > > SaslMultiMechanismConsumerTest.testCoordinatorFailover
> > > > >
> > > > >
> > > > >
> > > > > Thanks for running the release Bill.
> > > > >
> > > > >
> > > > >
> > > > > Regards,
> > > > > --Vahid
> > > > >
> > > > >
> > > > >
> > > > > On Fri, Mar 6, 2020 at 9:20 AM Colin McCabe < cmccabe@ apache.
> org (
> > > > > cmcc...@apache.org ) > wrote:
> > > > >
> > > > >
> > > > >>
> > > > >>
> > > > >> +1 (binding)
> > > > >>
> > > > >>
> > > > >>
> > > > >> Checked the git hash and branch, looked at the docs a bit. Ran
> > > > quickstart
> > > > >> (although not the connect or streams parts). Looks good.
> > > > >>
> > > > >>
> > > > >>
> > > > >> best,
> > > > >> Colin
> > > > >>
> > > > >>
> > > > >>
> > > > >> On Fri, Mar 6, 2020, at 07:31, David Arthur wrote:
> > > > >>
> > > > >>
> > > > >>>
> > > > >>>
> > > > >>> +1 (binding)
> > > > >>>
> > > > >>>
> > > > >>>
> > > > >>> Download kafka_2.13-2.4.1 and verified signature, ran quickstart,
> > > > >>> everything looks good.
> > > > >>>
> > > > >>>
> > > > >>>
> > > > >>> Thanks for running this release, Bill!
> > > > >>>
> > > > >>>
> > > > >>>
> > > > >>> -David
> > > > >>>
> > > > >>>
> > > > >>>
> > > > >>> On Wed, Mar 4, 2020 at 6:06 AM Eno Thereska < eno. thereska@
> > gmail.
> > > > com (
> > > > >>> eno.there...@gmail.com ) >
> > > > >>>
> > > > >>>
> > > > >>
> > > > >>
> > > > >>
> > > > >> wrote:
> > > > >>
> > > > >>
> > > > >>>
> > > > 
> > > > 
> > > >  Hi Bill,
> > > > 
> > > > 
> > > > 
> > > >  I built from source and ran unit and integration tests. They
> > passed.
> > > > There
> > > >  was a large number of skipped tests, but I'm assuming that is
> > > > intentional.
> > > > 
> > > > 
> > > > 
> > > > 
> > > >  Cheers
> > > >  Eno
> > > > 
> > > > 
> > > > 
> > > >  On Tue, Mar 3, 2020 at 8:42 PM Eric Lalonde < eric@ autonomic.
> > ai (
> > > >  e...@autonomic.ai ) > wrote:
> > > > 
> > > > 
> > > > >
> > > > >
> > > > > Hi,
> > > > >
> > > > >
> > > > >
> > > > > I ran:
> > > > > $
> > > > >
> > > > >
> > > > 
> > > > 
> > > > >>>
> > > > >>>
> > > > >>
> > > > >>
> > > > >>
> > > > >> https://github.com/elalonde/kafka/blob/master/bin/verify-kafka-rc.sh
> > > > >>
> > > > >>
> > > > >>
> > > > >>>
> > > > 
> > > > 
> > > >  <https://github.com/elalonde/kafka/blob/master/bin/verify-kafka-rc.sh>
> > > >  2.4.1 https://home.apache.org/~bbejeck/kafka-2.4.1-rc0

[jira] [Created] (KAFKA-9687) Improve Trogdor workload exit logging

2020-03-09 Thread Colin McCabe (Jira)
Colin McCabe created KAFKA-9687:
---

 Summary: Improve Trogdor workload exit logging
 Key: KAFKA-9687
 URL: https://issues.apache.org/jira/browse/KAFKA-9687
 Project: Kafka
  Issue Type: Improvement
Reporter: Colin McCabe


When Trogdor workloads exit, there are often several InterruptedExceptions 
printed at INFO level in the logs.  If the InterruptedException is expected 
because the task is being cancelled deliberately or has timed out, it would be 
good to log these exceptions at DEBUG and also have a more informative message 
at INFO stating simply that the threads were exiting as planned.
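Something along these lines, as a hedged sketch (the class, field, and method
names are made up for illustration, not actual Trogdor code):

```java
// Hedged sketch of the suggested logging pattern: demote expected
// InterruptedExceptions to DEBUG and log one informative INFO line.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class WorkloadThreadSketch {
    private static final Logger log =
            LoggerFactory.getLogger(WorkloadThreadSketch.class);
    private volatile boolean stopping = false;

    void runLoop() {
        try {
            while (!stopping) {
                doWork();
            }
            log.info("Workload thread exiting as planned.");
        } catch (InterruptedException e) {
            if (stopping) {
                // Expected when the task is cancelled or times out.
                log.info("Workload thread exiting as planned.");
                log.debug("Expected InterruptedException during shutdown", e);
            } else {
                log.info("Workload thread interrupted unexpectedly", e);
            }
            Thread.currentThread().interrupt();
        }
    }

    void doWork() throws InterruptedException {
        Thread.sleep(100); // placeholder for real workload activity
    }
}
```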



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Question about log flusher real frequency

2020-03-09 Thread Fares Oueslati
Hi Alexandre,

Thank you for your quick answer.

I want to monitor it because I'm trying to find out why our
existing Kafka cluster is configured to flush data every 10 milliseconds!
(people who configured it are not available anymore to answer).

As that value seems really low to me, I was trying to understand and to
monitor the "flush behaviour".

Fares

On Mon, Mar 9, 2020 at 5:24 PM Alexandre Dupriez <
alexandre.dupr...@gmail.com> wrote:

> Hi Fares,
>
> On Linux kernels, you can use the property "dirty_writeback_centisecs"
> [1] to configure the period between runs of the kernel's writeback
> (flusher) threads, which do this "sync" job. The period is usually set to
> 30 seconds.
> There are a few exceptions where Kafka explicitly forces a sync (via the
> force() method of the JDK's I/O API), e.g. when a segment is
> rolled or Kafka is shutting down.
>
> The page writeback activity from your kernel is monitorable at
> different levels of granularity and depending on the instrumentation
> you are willing to use.
>
> Why would you want to monitor this activity in the first place? Do you
> want to know exactly *when* your data is on the disk?
>
> [1] https://www.kernel.org/doc/Documentation/sysctl/vm.txt
>
> On Mon, Mar 9, 2020 at 15:58, Fares Oueslati  wrote:
> >
> > Hello,
> >
> > By default, both log.flush.interval.ms and log.flush.interval.messages
> are
> > set to Long.MAX_VALUE.
> >
> > As I understand it, this means that flushing the log to disk (fsync)
> > depends only on the file system.
> >
> > Is there any simple way to monitor that frequency?
> >
> > Is there a rule of thumb to estimate that value depending on the OS?
> >
> > Thank you guys!
> > Fares
>


Re: Question about log flusher real frequency

2020-03-09 Thread Alexandre Dupriez
Hi Fares,

On Linux kernels, you can use the property "dirty_writeback_centisecs"
[1] to configure the period between runs of the kernel's writeback
(flusher) threads, which do this "sync" job. The period is usually set to
30 seconds.
There are a few exceptions where Kafka explicitly forces a sync (via the
force() method of the JDK's I/O API), e.g. when a segment is
rolled or Kafka is shutting down.

The page writeback activity from your kernel is monitorable at
different levels of granularity and depending on the instrumentation
you are willing to use.

Why would you want to monitor this activity in the first place? Do you
want to know exactly *when* your data is on the disk?

[1] https://www.kernel.org/doc/Documentation/sysctl/vm.txt

On Mon, Mar 9, 2020 at 15:58, Fares Oueslati  wrote:
>
> Hello,
>
> By default, both log.flush.interval.ms and log.flush.interval.messages are
> set to Long.MAX_VALUE.
>
> As I understand it, this means that flushing the log to disk (fsync)
> depends only on the file system.
>
> Is there any simple way to monitor that frequency?
>
> Is there a rule of thumb to estimate that value depending on the OS?
>
> Thank you guys!
> Fares
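If the goal is specifically Kafka's own fsync activity rather than kernel
writeback, one option is to read the broker's LogFlushStats metric over JMX.
A small sketch, assuming the broker exposes JMX on localhost:9999 (e.g.
started with JMX_PORT=9999) and the usual
kafka.log:type=LogFlushStats,name=LogFlushRateAndTimeMs bean:

```java
// Sketch: read Kafka's log-flush rate over JMX. The port and bean name
// are assumptions; verify them against your broker's JMX tree first.
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class FlushRateProbe {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            ObjectName flushStats = new ObjectName(
                    "kafka.log:type=LogFlushStats,name=LogFlushRateAndTimeMs");
            System.out.println("flushes (total): "
                    + conn.getAttribute(flushStats, "Count"));
            System.out.println("flushes/sec (1-min rate): "
                    + conn.getAttribute(flushStats, "OneMinuteRate"));
        }
    }
}
```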


Question about log flusher real frequency

2020-03-09 Thread Fares Oueslati
Hello,

By default, both log.flush.interval.ms and log.flush.interval.messages are
set to Long.MAX_VALUE.

As I understand it, this means that flushing the log to disk (fsync) depends
only on the file system.

Is there any simple way to monitor that frequency?

Is there a rule of thumb to estimate that value depending on the OS?

Thank you guys!
Fares


Re: [DISCUSS] KIP-574: CLI Dynamic Configuration with file input

2020-03-09 Thread David Jacot
Hi Aneel,

Thank you for the KIP. I agree that managing complex configurations is not
easy with the current tool. Having the possibility to use a properties file
sounds quite handy to me. It makes it easier to edit and to reuse base
configurations.

I wonder if we should also add a `--delete-config-file` as a counterpart of
`--add-config-file`. It would be a bit weird to use a properties file in
this
case as the values are not necessary but it may be handy to have the
possibility to remove the configurations which have been set. Have you
considered this?

David

On Thu, Feb 27, 2020 at 11:15 PM Aneel Nazareth  wrote:

> I've created a PR for a potential implementation of this:
> https://github.com/apache/kafka/pull/8184 if we decide to go ahead with
> this KIP.
>
> On Wed, Feb 26, 2020 at 12:36 PM Aneel Nazareth 
> wrote:
>
> > Hi,
> >
> > I'd like to discuss adding a new argument to kafka-configs.sh
> > (ConfigCommand.scala).
> >
> > Recently I've been working on some things that require complex
> > configurations. I've chosen to represent them as JSON strings in my
> > server.properties. This works well, and I'm able to update the
> > configurations by editing server.properties and restarting the broker.
> I've
> > added the ability to dynamically configure them, and that works well
> using
> > the AdminClient. However, when I try to update these configurations using
> > kafka-configs.sh, I run into a problem. My configurations contain commas,
> > and kafka-configs.sh tries to break them up into key/value pairs at the
> > comma boundary.
> >
> > I'd like to enable setting these configurations from the command line, so
> > I'm proposing that we add a new option to kafka-configs.sh that takes a
> > properties file.
> >
> > I've created a KIP for this idea:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input
> > And a JIRA: https://issues.apache.org/jira/browse/KAFKA-9612
> >
> > I'd appreciate your feedback on the proposal.
> >
> > Thanks,
> > Aneel
> >
>


Re: Subject: [VOTE] 2.4.1 RC0

2020-03-09 Thread Sean Glover
+1 (non-binding).  I built from source and ran the unit test suite
successfully.

Thanks for running this release.  I'm looking forward to upgrading to 2.4.1.

Sean

On Mon, Mar 9, 2020 at 8:07 AM Mickael Maison 
wrote:

> Thanks for running the release!
> +1 (binding)
>
> - Verified signatures
> - Built from source
> - Ran unit tests, all passed
> - Ran through quickstart steps, all worked
>
> On Mon, Mar 9, 2020 at 11:04 AM Tom Bentley  wrote:
> >
> > +1 (non-binding)
> >
> > Built from source, all unit tests passed.
> >
> > Thanks Bill.
> >
> > On Mon, Mar 9, 2020 at 3:44 AM Gwen Shapira  wrote:
> >
> > > +1 (binding)
> > >
> > > Verified signatures, built jars from source, quickstart passed and
> local
> > > unit tests all passed.
> > >
> > > Thank you for the release Bill!
> > >
> > > Gwen Shapira
> > > Engineering Manager | Confluent
> > > 650.450.2760 | @gwenshap
> > > Follow us: Twitter | blog
> > >
> > > On Sat, Mar 07, 2020 at 8:15 PM, Vahid Hashemian <
> > > vahid.hashem...@gmail.com > wrote:
> > >
> > > >
> > > >
> > > >
> > > > +1 (binding)
> > > >
> > > >
> > > >
> > > > Verified signature, built from source, and ran quickstart
> successfully
> > > > (using openjdk version "11.0.6"). I also ran unit tests locally which
> > > > resulted in a few flaky tests for which there are already open Jiras:
> > > >
> > > >
> > > >
> > > > ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker
> > > > ConsumerBounceTest.testCloseDuringRebalance
> > > >
> > > >
> > > >
> > > >
> > >
> ConsumerBounceTest.testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize
> > > >
> PlaintextEndToEndAuthorizationTest.testNoConsumeWithDescribeAclViaAssign
> > > >
> > > >
> > > >
> > > >
> > >
> SaslClientsWithInvalidCredentialsTest.testManualAssignmentConsumerWithAuthenticationFailure
> > > > SaslMultiMechanismConsumerTest.testCoordinatorFailover
> > > >
> > > >
> > > >
> > > > Thanks for running the release Bill.
> > > >
> > > >
> > > >
> > > > Regards,
> > > > --Vahid
> > > >
> > > >
> > > >
> > > > On Fri, Mar 6, 2020 at 9:20 AM Colin McCabe < cmccabe@ apache. org (
> > > > cmcc...@apache.org ) > wrote:
> > > >
> > > >
> > > >>
> > > >>
> > > >> +1 (binding)
> > > >>
> > > >>
> > > >>
> > > >> Checked the git hash and branch, looked at the docs a bit. Ran
> > > quickstart
> > > >> (although not the connect or streams parts). Looks good.
> > > >>
> > > >>
> > > >>
> > > >> best,
> > > >> Colin
> > > >>
> > > >>
> > > >>
> > > >> On Fri, Mar 6, 2020, at 07:31, David Arthur wrote:
> > > >>
> > > >>
> > > >>>
> > > >>>
> > > >>> +1 (binding)
> > > >>>
> > > >>>
> > > >>>
> > > >>> Download kafka_2.13-2.4.1 and verified signature, ran quickstart,
> > > >>> everything looks good.
> > > >>>
> > > >>>
> > > >>>
> > > >>> Thanks for running this release, Bill!
> > > >>>
> > > >>>
> > > >>>
> > > >>> -David
> > > >>>
> > > >>>
> > > >>>
> > > >>> On Wed, Mar 4, 2020 at 6:06 AM Eno Thereska < eno. thereska@
> gmail.
> > > com (
> > > >>> eno.there...@gmail.com ) >
> > > >>>
> > > >>>
> > > >>
> > > >>
> > > >>
> > > >> wrote:
> > > >>
> > > >>
> > > >>>
> > > 
> > > 
> > >  Hi Bill,
> > > 
> > > 
> > > 
> > >  I built from source and ran unit and integration tests. They
> passed.
> > > There
> > >  was a large number of skipped tests, but I'm assuming that is
> > > intentional.
> > > 
> > > 
> > > 
> > > 
> > >  Cheers
> > >  Eno
> > > 
> > > 
> > > 
> > >  On Tue, Mar 3, 2020 at 8:42 PM Eric Lalonde < eric@ autonomic.
> ai (
> > >  e...@autonomic.ai ) > wrote:
> > > 
> > > 
> > > >
> > > >
> > > > Hi,
> > > >
> > > >
> > > >
> > > > I ran:
> > > > $
> > > >
> > > >
> > > 
> > > 
> > > >>>
> > > >>>
> > > >>
> > > >>
> > > >>
> > > >> https://github.com/elalonde/kafka/blob/master/bin/verify-kafka-rc.sh
> > > >>
> > > >>
> > > >>
> > > >>>
> > > 
> > > 
> > >  <https://github.com/elalonde/kafka/blob/master/bin/verify-kafka-rc.sh>
> > >  2.4.1 https://home.apache.org/~bbejeck/kafka-2.4.1-rc0
> > > 
> > > 
> > > >
> > > >
> > > > All checksums and signatures are good and all unit and
> integration
> > > >
> > > >
> > > 
> > > 
> > > >>>
> > > >>>
> > > >>
> > > >>
> > > >>
> > > >> tests
> > > >>
> > > >>
> > > >>>
> > > 
> > > 
> > >  that were executed passed successfully.
> > > 
> > > 
> > > >
> > > >
> > > > - Eric
> > > >
> > > >
> > > >>
> > > >>
> > > >>

Build failed in Jenkins: kafka-2.2-jdk8-old #215

2020-03-09 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8245: Fix Flaky Test


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H38 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision 9d71c2f373e51dc8caf4308eef4d37ece3c899b4 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 9d71c2f373e51dc8caf4308eef4d37ece3c899b4
Commit message: "KAFKA-8245: Fix Flaky Test 
DeleteConsumerGroupsTest#testDeleteCmdAllGroups (#8032) (#8243)"
 > git rev-list --no-walk 5b01f63a42b30bd41f91a1ab1334161cb8d3ff28 # timeout=10
ERROR: No tool found matching GRADLE_4_8_1_HOME
[kafka-2.2-jdk8-old] $ /bin/bash -xe /tmp/jenkins5826663244266864645.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins5826663244266864645.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
ERROR: No tool found matching GRADLE_4_8_1_HOME
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
No credentials specified
ERROR: No tool found matching GRADLE_4_8_1_HOME
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=9d71c2f373e51dc8caf4308eef4d37ece3c899b4, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
ERROR: No tool found matching GRADLE_4_8_1_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
Not sending mail to unregistered user j...@confluent.io
Not sending mail to unregistered user b...@confluent.io
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user ism...@juma.me.uk


[jira] [Resolved] (KAFKA-8245) Flaky Test DeleteConsumerGroupsTest#testDeleteCmdAllGroups

2020-03-09 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-8245.

Resolution: Fixed

> Flaky Test DeleteConsumerGroupsTest#testDeleteCmdAllGroups
> --
>
> Key: KAFKA-8245
> URL: https://issues.apache.org/jira/browse/KAFKA-8245
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Chia-Ping Tsai
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.1.2, 2.2.3, 2.5.0, 2.3.2, 2.4.1
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3781/testReport/junit/kafka.admin/DeleteConsumerGroupsTest/testDeleteCmdAllGroups/]
> {quote}java.lang.AssertionError: The group did become empty as expected. at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.admin.DeleteConsumerGroupsTest.testDeleteCmdAllGroups(DeleteConsumerGroupsTest.scala:148){quote}
> STDOUT
> {quote}Error: Deletion of some consumer groups failed: * Group 'test.group' 
> could not be deleted due to: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.GroupNotEmptyException: The group is not 
> empty. Error: Deletion of some consumer groups failed: * Group 
> 'missing.group' could not be deleted due to: 
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.GroupIdNotFoundException: The group id does 
> not exist. [2019-04-16 09:42:02,316] WARN Unable to read additional data from 
> client sessionid 0x104f958dba3, likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376) Deletion of requested 
> consumer groups ('test.group') was successful. Error: Deletion of some 
> consumer groups failed: * Group 'missing.group' could not be deleted due to: 
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.GroupIdNotFoundException: The group id does 
> not exist. These consumer groups were deleted successfully: 
> 'test.group'{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[Vote] KIP-569: Update DescribeConfigsResponse to include additional metadata information

2020-03-09 Thread Shailesh Panwar
Hi All,
I would like to start a vote on KIP-569: Update
DescribeConfigsResponse to include additional metadata information

The KIP is here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-569%3A+DescribeConfigsResponse+-+Update+the+schema+to+include+additional+metadata+information+of+the+field

Thanks,
Shailesh


Re: Subject: [VOTE] 2.4.1 RC0

2020-03-09 Thread Mickael Maison
Thanks for running the release!
+1 (binding)

- Verified signatures
- Built from source
- Ran unit tests, all passed
- Ran through quickstart steps, all worked

On Mon, Mar 9, 2020 at 11:04 AM Tom Bentley  wrote:
>
> +1 (non-binding)
>
> Built from source, all unit tests passed.
>
> Thanks Bill.
>
> On Mon, Mar 9, 2020 at 3:44 AM Gwen Shapira  wrote:
>
> > +1 (binding)
> >
> > Verified signatures, built jars from source, quickstart passed and local
> > unit tests all passed.
> >
> > Thank you for the release Bill!
> >
> > Gwen Shapira
> > Engineering Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter | blog
> >
> > On Sat, Mar 07, 2020 at 8:15 PM, Vahid Hashemian <
> > vahid.hashem...@gmail.com > wrote:
> >
> > >
> > >
> > >
> > > +1 (binding)
> > >
> > >
> > >
> > > Verified signature, built from source, and ran quickstart successfully
> > > (using openjdk version "11.0.6"). I also ran unit tests locally which
> > > resulted in a few flaky tests for which there are already open Jiras:
> > >
> > >
> > >
> > > ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker
> > > ConsumerBounceTest.testCloseDuringRebalance
> > >
> > >
> > >
> > >
> > ConsumerBounceTest.testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize
> > > PlaintextEndToEndAuthorizationTest.testNoConsumeWithDescribeAclViaAssign
> > >
> > >
> > >
> > >
> > SaslClientsWithInvalidCredentialsTest.testManualAssignmentConsumerWithAuthenticationFailure
> > > SaslMultiMechanismConsumerTest.testCoordinatorFailover
> > >
> > >
> > >
> > > Thanks for running the release Bill.
> > >
> > >
> > >
> > > Regards,
> > > --Vahid
> > >
> > >
> > >
> > > On Fri, Mar 6, 2020 at 9:20 AM Colin McCabe < cmccabe@ apache. org (
> > > cmcc...@apache.org ) > wrote:
> > >
> > >
> > >>
> > >>
> > >> +1 (binding)
> > >>
> > >>
> > >>
> > >> Checked the git hash and branch, looked at the docs a bit. Ran
> > quickstart
> > >> (although not the connect or streams parts). Looks good.
> > >>
> > >>
> > >>
> > >> best,
> > >> Colin
> > >>
> > >>
> > >>
> > >> On Fri, Mar 6, 2020, at 07:31, David Arthur wrote:
> > >>
> > >>
> > >>>
> > >>>
> > >>> +1 (binding)
> > >>>
> > >>>
> > >>>
> > >>> Download kafka_2.13-2.4.1 and verified signature, ran quickstart,
> > >>> everything looks good.
> > >>>
> > >>>
> > >>>
> > >>> Thanks for running this release, Bill!
> > >>>
> > >>>
> > >>>
> > >>> -David
> > >>>
> > >>>
> > >>>
> > >>> On Wed, Mar 4, 2020 at 6:06 AM Eno Thereska < eno. thereska@ gmail.
> > com (
> > >>> eno.there...@gmail.com ) >
> > >>>
> > >>>
> > >>
> > >>
> > >>
> > >> wrote:
> > >>
> > >>
> > >>>
> > 
> > 
> >  Hi Bill,
> > 
> > 
> > 
> >  I built from source and ran unit and integration tests. They passed.
> > There
> >  was a large number of skipped tests, but I'm assuming that is
> > intentional.
> > 
> > 
> > 
> > 
> >  Cheers
> >  Eno
> > 
> > 
> > 
> >  On Tue, Mar 3, 2020 at 8:42 PM Eric Lalonde < eric@ autonomic. ai (
> >  e...@autonomic.ai ) > wrote:
> > 
> > 
> > >
> > >
> > > Hi,
> > >
> > >
> > >
> > > I ran:
> > > $
> > >
> > >
> > 
> > 
> > >>>
> > >>>
> > >>
> > >>
> > >>
> > >> https://github.com/elalonde/kafka/blob/master/bin/verify-kafka-rc.sh
> > >>
> > >>
> > >>
> > >>>
> > 
> > 
> >  <https://github.com/elalonde/kafka/blob/master/bin/verify-kafka-rc.sh>
> >  2.4.1 https://home.apache.org/~bbejeck/kafka-2.4.1-rc0
> > 
> > 
> > >
> > >
> > > All checksums and signatures are good and all unit and integration
> > >
> > >
> > 
> > 
> > >>>
> > >>>
> > >>
> > >>
> > >>
> > >> tests
> > >>
> > >>
> > >>>
> > 
> > 
> >  that were executed passed successfully.
> > 
> > 
> > >
> > >
> > > - Eric
> > >
> > >
> > >>
> > >>
> > >> On Mar 2, 2020, at 6:39 PM, Bill Bejeck < bbejeck@ gmail. com (
> > >> bbej...@gmail.com ) > wrote:
> > >>
> > >>
> > >>
> > >> Hello Kafka users, developers and client-developers,
> > >>
> > >>
> > >>
> > >> This is the first candidate for release of Apache Kafka 2.4.1.
> > >>
> > >>
> > >>
> > >> This is a bug fix release and it includes fixes and improvements
> > >>
> > >>
> > >
> > >
> > 
> > 
> > >>>
> > >>>
> > >>
> > >>
> > >>
> > >> from
> > >>
> > >>
> > >>>
> > 
> > 
> >  38
> > 
> > 
> > >
> > >>
> > >>
> > >> JIRAs, including a few critical

[jira] [Created] (KAFKA-9686) MockConsumer#endOffsets should be idempotent

2020-03-09 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-9686:
-

 Summary: MockConsumer#endOffsets should be idempotent
 Key: KAFKA-9686
 URL: https://issues.apache.org/jira/browse/KAFKA-9686
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


```java
private Long getEndOffset(List<Long> offsets) {
    if (offsets == null || offsets.isEmpty()) {
        return null;
    }
    return offsets.size() > 1 ? offsets.remove(0) : offsets.get(0);
}
```

The above code has two issues:
1. It does not return the latest offset, since the latest offset is at the end 
of offsets.
2. It removes an element from offsets, so MockConsumer#endOffsets becomes 
non-idempotent.
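A minimal sketch of a fix consistent with both points, returning the last
(latest) element and never mutating the caller's list:

```java
// Possible fix: return the latest element without removing anything,
// so repeated MockConsumer#endOffsets calls are idempotent.
private Long getEndOffset(List<Long> offsets) {
    if (offsets == null || offsets.isEmpty()) {
        return null;
    }
    return offsets.get(offsets.size() - 1);
}
```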




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Subject: [VOTE] 2.4.1 RC0

2020-03-09 Thread Tom Bentley
+1 (non-binding)

Built from source, all unit tests passed.

Thanks Bill.

On Mon, Mar 9, 2020 at 3:44 AM Gwen Shapira  wrote:

> +1 (binding)
>
> Verified signatures, built jars from source, quickstart passed and local
> unit tests all passed.
>
> Thank you for the release Bill!
>
> Gwen Shapira
> Engineering Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>
> On Sat, Mar 07, 2020 at 8:15 PM, Vahid Hashemian <
> vahid.hashem...@gmail.com > wrote:
>
> >
> >
> >
> > +1 (binding)
> >
> >
> >
> > Verified signature, built from source, and ran quickstart successfully
> > (using openjdk version "11.0.6"). I also ran unit tests locally which
> > resulted in a few flaky tests for which there are already open Jiras:
> >
> >
> >
> > ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker
> > ConsumerBounceTest.testCloseDuringRebalance
> >
> >
> >
> >
> ConsumerBounceTest.testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize
> > PlaintextEndToEndAuthorizationTest.testNoConsumeWithDescribeAclViaAssign
> >
> >
> >
> >
> SaslClientsWithInvalidCredentialsTest.testManualAssignmentConsumerWithAuthenticationFailure
> > SaslMultiMechanismConsumerTest.testCoordinatorFailover
> >
> >
> >
> > Thanks for running the release Bill.
> >
> >
> >
> > Regards,
> > --Vahid
> >
> >
> >
> > On Fri, Mar 6, 2020 at 9:20 AM Colin McCabe < cmccabe@ apache. org (
> > cmcc...@apache.org ) > wrote:
> >
> >
> >>
> >>
> >> +1 (binding)
> >>
> >>
> >>
> >> Checked the git hash and branch, looked at the docs a bit. Ran
> quickstart
> >> (although not the connect or streams parts). Looks good.
> >>
> >>
> >>
> >> best,
> >> Colin
> >>
> >>
> >>
> >> On Fri, Mar 6, 2020, at 07:31, David Arthur wrote:
> >>
> >>
> >>>
> >>>
> >>> +1 (binding)
> >>>
> >>>
> >>>
> >>> Download kafka_2.13-2.4.1 and verified signature, ran quickstart,
> >>> everything looks good.
> >>>
> >>>
> >>>
> >>> Thanks for running this release, Bill!
> >>>
> >>>
> >>>
> >>> -David
> >>>
> >>>
> >>>
> >>> On Wed, Mar 4, 2020 at 6:06 AM Eno Thereska < eno. thereska@ gmail.
> com (
> >>> eno.there...@gmail.com ) >
> >>>
> >>>
> >>
> >>
> >>
> >> wrote:
> >>
> >>
> >>>
> 
> 
>  Hi Bill,
> 
> 
> 
>  I built from source and ran unit and integration tests. They passed.
> There
>  was a large number of skipped tests, but I'm assuming that is
> intentional.
> 
> 
> 
> 
>  Cheers
>  Eno
> 
> 
> 
>  On Tue, Mar 3, 2020 at 8:42 PM Eric Lalonde < eric@ autonomic. ai (
>  e...@autonomic.ai ) > wrote:
> 
> 
> >
> >
> > Hi,
> >
> >
> >
> > I ran:
> > $
> >
> >
> 
> 
> >>>
> >>>
> >>
> >>
> >>
> >> https://github.com/elalonde/kafka/blob/master/bin/verify-kafka-rc.sh
> >>
> >>
> >>
> >>>
> 
> 
>  <https://github.com/elalonde/kafka/blob/master/bin/verify-kafka-rc.sh>
>  2.4.1 https://home.apache.org/~bbejeck/kafka-2.4.1-rc0
> 
> 
> >
> >
> > All checksums and signatures are good and all unit and integration
> >
> >
> 
> 
> >>>
> >>>
> >>
> >>
> >>
> >> tests
> >>
> >>
> >>>
> 
> 
>  that were executed passed successfully.
> 
> 
> >
> >
> > - Eric
> >
> >
> >>
> >>
> >> On Mar 2, 2020, at 6:39 PM, Bill Bejeck < bbejeck@ gmail. com (
> >> bbej...@gmail.com ) > wrote:
> >>
> >>
> >>
> >> Hello Kafka users, developers and client-developers,
> >>
> >>
> >>
> >> This is the first candidate for release of Apache Kafka 2.4.1.
> >>
> >>
> >>
> >> This is a bug fix release and it includes fixes and improvements
> >>
> >>
> >
> >
> 
> 
> >>>
> >>>
> >>
> >>
> >>
> >> from
> >>
> >>
> >>>
> 
> 
>  38
> 
> 
> >
> >>
> >>
> >> JIRAs, including a few critical bugs.
> >>
> >>
> >>
> >> Release notes for the 2.4.1 release:
> >>
> >>
> >
> >
> 
> 
> >>>
> >>>
> >>
> >>
> >>
> >> https://home.apache.org/~bbejeck/kafka-2.4.1-rc0/RELEASE_NOTES.html
> >>
> >>
> >>>
> 
> >
> >>
> >>
> >> *Please download, test and vote by Thursday, March 5, 9 am PT*
> >>
> >>
> >>
> >> Kafka's KEYS file containing PGP keys we use to sign the release:
> >> https://kafka.apache.org/KEYS
> >>
> >>
> >>
> >> * Release artifacts to be voted upon (source 

ConfigProvider notification

2020-03-09 Thread Tom Bentley
Colin asked that discussion about the future of the ConfigProvider
notification mechanism happen on the list (rather than in
https://issues.apache.org/jira/browse/KAFKA-9635 where I originally asked).

The background that I've pieced together is that the subscribe family of
methods was originally added in KIP-297 (which was focussed on adding
ConfigProviders for Kafka Connect), but subsequent discussion for KIP-421
(which was about enabling their use elsewhere: brokers, producers, consumers,
etc.) decided that using these methods to dynamically reconfigure e.g. a
broker was not desirable, because there was no way to do so atomically if
multiple files were being changed.

I'm sure we all agree it would be nice to have a notification feature for
ConfigProviders, but the atomicity issue hasn't gone away. I suppose that
what's needed is a way to indicate that all the modifications to all the
config provider sources are now complete so that it is safe to start the
reconfiguration.
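For readers who haven't looked at it recently, this is roughly the shape of
the subscribe family that KIP-297 added to
org.apache.kafka.common.config.provider.ConfigProvider (reproduced from memory
as a sketch; check the current interface before relying on it):

```java
// Approximate shape of the KIP-297 subscription methods, shown for
// context only; see the actual ConfigProvider interface for details.
import java.util.Set;
import org.apache.kafka.common.config.ConfigChangeCallback;

public interface SubscriptionMethodsSketch {
    // Invoke the callback whenever any of the given keys under path change.
    void subscribe(String path, Set<String> keys, ConfigChangeCallback callback);

    void unsubscribe(String path, Set<String> keys, ConfigChangeCallback callback);

    void unsubscribeAll();
}
```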

I'm happy to do some work on this and come up with a concrete proposal, if
no one is already working on it?

Cheers,

Tom