[jira] [Commented] (KAFKA-12635) Mirrormaker 2 offset sync is incorrect if the target partition is empty

2021-05-10 Thread Ning Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342308#comment-17342308
 ] 

Ning Zhang commented on KAFKA-12635:


Great, thanks for the feedback. I will proceed to finalize the pull request. 

> Mirrormaker 2 offset sync is incorrect if the target partition is empty
> ---
>
> Key: KAFKA-12635
> URL: https://issues.apache.org/jira/browse/KAFKA-12635
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.7.0
>Reporter: Frank Yi
>Assignee: Ning Zhang
>Priority: Major
>
> This bug occurs when using Mirrormaker with "sync.group.offsets.enabled = 
> true".
> If a source partition is empty, but the source consumer group's offset for 
> that partition is non-zero, then Mirrormaker sets the target consumer group's 
> offset for that partition to the literal, not translated, offset of the 
> source consumer group. This state can be reached if the source consumer 
> group consumed some records that have since been deleted (e.g., by a 
> retention policy), or if Mirrormaker replication is set to start at 
> "latest". This bug causes the 
> target consumer group's lag for that partition to be negative and breaks 
> offset sync for that partition until lag is positive.
> The correct behavior when the source partition is empty would be to set the 
> target offset to the translated offset, not literal offset, which in this 
> case would always be 0. 
> Original email thread on this issue: 
> https://lists.apache.org/thread.html/r7c54ee5f57227367b911d4abffa72781772d8dd3b72d75eb65ee19f7%40%3Cusers.kafka.apache.org%3E
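
For illustration, a hedged sketch (hypothetical helper names, not MirrorMaker 2's actual classes, which live around MirrorCheckpointTask) of the behavior the ticket asks for: commit the translated offset, falling back to the target log end offset (0 for an empty partition) when no translation exists, instead of copying the source offset literally.

```java
// Hedged sketch only; MirrorMaker 2's real implementation differs in detail.
interface OffsetTranslator {
    // Hypothetical helper: the target offset for a source offset, or null
    // if no offset-sync record covers it.
    Long translate(long sourceOffset);
}

class OffsetSyncSketch {
    static long targetCommitOffset(long sourceOffset, OffsetTranslator translator, long targetLogEndOffset) {
        Long translated = translator.translate(sourceOffset);
        if (translated == null) {
            // Source records were deleted (e.g., by retention) or replication
            // started at "latest": commit at the target log end offset (0 when
            // the partition is empty) instead of copying sourceOffset, which
            // would produce negative lag on the target group.
            return targetLogEndOffset;
        }
        // Never commit past the target partition's log end offset.
        return Math.min(translated, targetLogEndOffset);
    }
}
```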



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12635) Mirrormaker 2 offset sync is incorrect if the target partition is empty

2021-05-10 Thread Frank Yi (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342302#comment-17342302
 ] 

Frank Yi commented on KAFKA-12635:
--

fix works for me! Thanks

> Mirrormaker 2 offset sync is incorrect if the target partition is empty
> ---
>
> Key: KAFKA-12635
> URL: https://issues.apache.org/jira/browse/KAFKA-12635
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.7.0
>Reporter: Frank Yi
>Assignee: Ning Zhang
>Priority: Major
>
> This bug occurs when using Mirrormaker with "sync.group.offsets.enabled = 
> true".
> If a source partition is empty, but the source consumer group's offset for 
> that partition is non-zero, then Mirrormaker sets the target consumer group's 
> offset for that partition to the literal, not translated, offset of the 
> source consumer group. This state can be reached if the source consumer 
> group consumed some records that have since been deleted (e.g., by a 
> retention policy), or if Mirrormaker replication is set to start at 
> "latest". This bug causes the 
> target consumer group's lag for that partition to be negative and breaks 
> offset sync for that partition until lag is positive.
> The correct behavior when the source partition is empty would be to set the 
> target offset to the translated offset, not literal offset, which in this 
> case would always be 0. 
> Original email thread on this issue: 
> https://lists.apache.org/thread.html/r7c54ee5f57227367b911d4abffa72781772d8dd3b72d75eb65ee19f7%40%3Cusers.kafka.apache.org%3E



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] ableegoldman opened a new pull request #10666: MINOR: prevent cleanup() from being called while Streams is still shutting down

2021-05-10 Thread GitBox


ableegoldman opened a new pull request #10666:
URL: https://github.com/apache/kafka/pull/10666


   Currently `KafkaStreams#cleanUp` only throws an IllegalStateException if the 
state is RUNNING or REBALANCING; however, the application could be in the 
process of shutting down, in which case StreamThreads may still be running. We 
should also throw if the state is PENDING_ERROR or PENDING_SHUTDOWN.
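
   For reference, a minimal sketch (hypothetical and heavily simplified; the real check lives inside `KafkaStreams#cleanUp`) of the stricter guard proposed here:

```java
import java.util.EnumSet;

// Hedged sketch of the proposed state check, not the actual KafkaStreams code.
class CleanupGuardSketch {
    enum State { CREATED, REBALANCING, RUNNING, PENDING_SHUTDOWN, PENDING_ERROR, NOT_RUNNING, ERROR }

    // While running, rebalancing, or still shutting down, StreamThreads may
    // still be writing to local state directories, so cleanup must be refused.
    private static final EnumSet<State> UNSAFE_FOR_CLEANUP =
            EnumSet.of(State.RUNNING, State.REBALANCING, State.PENDING_SHUTDOWN, State.PENDING_ERROR);

    static void checkCleanupAllowed(final State state) {
        if (UNSAFE_FOR_CLEANUP.contains(state)) {
            throw new IllegalStateException("Cannot clean up while the application is in state " + state);
        }
    }
}
```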


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] ijuma commented on a change in pull request #10573: KAFKA-12574: KIP-732, Deprecate eos-alpha and replace eos-beta with eos-v2

2021-05-10 Thread GitBox


ijuma commented on a change in pull request #10573:
URL: https://github.com/apache/kafka/pull/10573#discussion_r629848073



##
File path: 
clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java
##
@@ -525,6 +525,11 @@ private TransactionManager configureTransactionState(ProducerConfig config,
 final int transactionTimeoutMs = config.getInt(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG);
 final long retryBackoffMs = config.getLong(ProducerConfig.RETRY_BACKOFF_MS_CONFIG);
 final boolean autoDowngradeTxnCommit = config.getBoolean(ProducerConfig.AUTO_DOWNGRADE_TXN_COMMIT);
+// Only log a warning if being used outside of Streams, which we know includes "StreamThread-" in the client id
+if (autoDowngradeTxnCommit && !clientId.contains("StreamThread-")) {

Review comment:
   4.0 is probably 18 months away; that's a reasonably long time. Why do we 
need to log a warning only if it's not Streams?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] mjsax commented on a change in pull request #10573: KAFKA-12574: KIP-732, Deprecate eos-alpha and replace eos-beta with eos-v2

2021-05-10 Thread GitBox


mjsax commented on a change in pull request #10573:
URL: https://github.com/apache/kafka/pull/10573#discussion_r629845576



##
File path: 
clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java
##
@@ -179,10 +179,18 @@ public void beginTransaction() throws ProducerFencedException {
 this.sentOffsets = false;
 }
 
+@SuppressWarnings("deprecation")

Review comment:
   Would be better I guess. But doesn't it inherit the `@Deprecated` 
annotation automatically?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] mjsax commented on a change in pull request #10573: KAFKA-12574: KIP-732, Deprecate eos-alpha and replace eos-beta with eos-v2

2021-05-10 Thread GitBox


mjsax commented on a change in pull request #10573:
URL: https://github.com/apache/kafka/pull/10573#discussion_r629845081



##
File path: 
clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java
##
@@ -525,6 +525,11 @@ private TransactionManager configureTransactionState(ProducerConfig config,
 final int transactionTimeoutMs = config.getInt(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG);
 final long retryBackoffMs = config.getLong(ProducerConfig.RETRY_BACKOFF_MS_CONFIG);
 final boolean autoDowngradeTxnCommit = config.getBoolean(ProducerConfig.AUTO_DOWNGRADE_TXN_COMMIT);
+// Only log a warning if being used outside of Streams, which we know includes "StreamThread-" in the client id
+if (autoDowngradeTxnCommit && !clientId.contains("StreamThread-")) {

Review comment:
   In general I agree, however given that we intend to remove it in 4.0 (which 
should not be too far out), it seems acceptable? If you feel strongly about it, 
do you have a proposal for how to avoid it?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (KAFKA-12718) SessionWindows are closed too early

2021-05-10 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-12718:
---

Assignee: Juan C. Gonzalez-Zurita

> SessionWindows are closed too early
> ---
>
> Key: KAFKA-12718
> URL: https://issues.apache.org/jira/browse/KAFKA-12718
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Juan C. Gonzalez-Zurita
>Priority: Major
>  Labels: beginner, easy-fix, newbie
> Fix For: 3.0.0
>
>
> SessionWindows are defined based on a {{gap}} parameter, and also support an 
> additional {{grace-period}} configuration to handle out-of-order data.
> To incorporate the session-gap a session window should only be closed at 
> {{window-end + gap}} and to incorporate grace-period, the close time should 
> be pushed out further to {{window-end + gap + grace}}.
> However, at the moment we compute the window close time as {{window-end + 
> grace}}, omitting the {{gap}} parameter.
> Because the default grace-period is 24h, most users might not notice this 
> issue. Even if they set a grace-period explicitly (e.g., when using 
> suppress()), they would most likely set a grace-period larger than the 
> gap-time, not hitting the issue (or maybe only realizing it when inspecting 
> the behavior closely).
> However, if a user wants to disable the grace-period and sets it to zero (or 
> any other value smaller than the gap-time), sessions might be closed too 
> early and users might notice.
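
To make the off-by-gap concrete, a small self-contained sketch (illustrating the formulas from the ticket, not the actual Streams internals) contrasting the current and intended close-time computation:

```java
// Hedged sketch: contrasts the buggy and intended session close times.
public class SessionCloseTimeSketch {
    // Current (buggy) behavior: the inactivity gap is dropped.
    static long buggyCloseTime(long windowEnd, long gapMs, long graceMs) {
        return windowEnd + graceMs;
    }

    // Intended behavior: the session can only close after gap + grace.
    static long intendedCloseTime(long windowEnd, long gapMs, long graceMs) {
        return windowEnd + gapMs + graceMs;
    }

    public static void main(String[] args) {
        // With gap = 30s and grace = 0, the buggy formula closes the window
        // 30 seconds too early, dropping records that arrive within the gap.
        System.out.println(buggyCloseTime(1_000_000L, 30_000L, 0L));    // 1000000
        System.out.println(intendedCloseTime(1_000_000L, 30_000L, 0L)); // 1030000
    }
}
```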



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12718) SessionWindows are closed too early

2021-05-10 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342293#comment-17342293
 ] 

Matthias J. Sax commented on KAFKA-12718:
-

[~gonzur] it seems [~byusti] lost interest in working on this ticket. Feel free 
to pick it up.

> SessionWindows are closed too early
> ---
>
> Key: KAFKA-12718
> URL: https://issues.apache.org/jira/browse/KAFKA-12718
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Major
>  Labels: beginner, easy-fix, newbie
> Fix For: 3.0.0
>
>
> SessionWindows are defined based on a {{gap}} parameter, and also support an 
> additional {{grace-period}} configuration to handle out-of-order data.
> To incorporate the session-gap a session window should only be closed at 
> {{window-end + gap}} and to incorporate grace-period, the close time should 
> be pushed out further to {{window-end + gap + grace}}.
> However, at the moment we compute the window close time as {{window-end + 
> grace}}, omitting the {{gap}} parameter.
> Because the default grace-period is 24h, most users might not notice this 
> issue. Even if they set a grace-period explicitly (e.g., when using 
> suppress()), they would most likely set a grace-period larger than the 
> gap-time, not hitting the issue (or maybe only realizing it when inspecting 
> the behavior closely).
> However, if a user wants to disable the grace-period and sets it to zero (or 
> any other value smaller than the gap-time), sessions might be closed too 
> early and users might notice.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12760) Delete My Account

2021-05-10 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-12760.
-
Resolution: Invalid

[~byusti] – we cannot delete your account either.

You could try to file a ticket against 
[https://issues.apache.org/jira/projects/INFRA] – maybe they are able to help 
with this request.

> Delete My Account
> -
>
> Key: KAFKA-12760
> URL: https://issues.apache.org/jira/browse/KAFKA-12760
> Project: Kafka
>  Issue Type: Wish
>Reporter: name
>Priority: Minor
>
> I wish to have my account deleted. There doesn't seem to be a way to do it 
> from within my own account, but it should be possible for an admin to do it.
> Many thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12771) CheckStyle attempted upgrade (8.36.2 -->> 8.41.1) summons a pack of 'Indentation' errors

2021-05-10 Thread Dongjin Lee (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342288#comment-17342288
 ] 

Dongjin Lee commented on KAFKA-12771:
-

Hi [~dejan2609], This issue is addressed in KAFKA-12572. Could you have [a 
look|https://github.com/apache/kafka/pull/10428]? It seems like we can upgrade 
the checkstyle version once it is merged.

> CheckStyle attempted upgrade (8.36.2 -->> 8.41.1) summons a pack of 
> 'Indentation' errors
> 
>
> Key: KAFKA-12771
> URL: https://issues.apache.org/jira/browse/KAFKA-12771
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Dejan Stojadinović
>Assignee: Dejan Stojadinović
>Priority: Minor
>
> ^*Prologue*: 
> [https://github.com/apache/kafka/pull/10656#issuecomment-836071563]^
> *Scenario:*
>  * bump CheckStyle to a more recent version (8.36.2 -->> 8.41.1)
>  * introduce (temporarily !) maxErrors CheckStyle property (in order to count 
> errors)
>  * execute gradle command: *_./gradlew checkstyleMain checkstyleTest_*
>  * around 50 'Indentation' CheckStyle errors (across 18 source code files) 
> are shown
> *Note:* there were some changes related to indentation between CheckStyle 
> *8.36.2* and *8.41.1*: 
> [https://checkstyle.sourceforge.io/releasenotes.html#Release_8.41.1]
> *What can be done (options):*
>  # relax CheckStyle 'Indentation' rules (if possible)
>  # comply with new CheckStyle 'Indentation' rules (and change/fix indentation 
> for these source code files)
>  # there is a slim chance that this is some kind of CheckStyle 
> regression (maybe similar to this one: 
> [https://github.com/checkstyle/checkstyle/issues/9341]). This should be 
> checked with the CheckStyle team.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] showuon commented on pull request #10665: KAFKA-9009: increase replica.lag.time.max.ms to make the test reliable

2021-05-10 Thread GitBox


showuon commented on pull request #10665:
URL: https://github.com/apache/kafka/pull/10665#issuecomment-837734046


   @edoardocomar @mimaison, could you help review this PR to make the test 
reliable? Thank you.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] showuon opened a new pull request #10665: KAFKA-9009: increase replica.lag.time.max.ms to make the test reliable

2021-05-10 Thread GitBox


showuon opened a new pull request #10665:
URL: https://github.com/apache/kafka/pull/10665


   We used to set a low `replica.lag.time.max.ms` value (2 sec) to speed up the 
test, but 2 seconds is not long enough in a slow Jenkins environment and caused 
the follower to get kicked out of the ISR, which increments the 
`UnderReplicatedPartitions` count. Increase the `replica.lag.time.max.ms` value 
to make this test more reliable.
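
   For context, a hedged sketch of the knob being tuned (the config key is real; the concrete value below is illustrative, not necessarily the one chosen in this PR): a follower that has not caught up within `replica.lag.time.max.ms` is dropped from the ISR, which is what bumps `UnderReplicatedPartitions` in the flaky runs.

```java
import java.util.Properties;

// Illustrative broker test config; the actual value used in the PR may differ.
public class BrokerLagConfigSketch {
    public static Properties brokerProps() {
        Properties props = new Properties();
        // Was 2000 (2 sec) in the flaky test; a larger value gives slow CI
        // followers more time before they are evicted from the ISR.
        props.put("replica.lag.time.max.ms", "30000");
        return props;
    }
}
```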
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (KAFKA-12757) Move server related common and public classes into separate module(s).

2021-05-10 Thread Satish Duggana (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342272#comment-17342272
 ] 

Satish Duggana commented on KAFKA-12757:


[~junrao] I am not aware of cases like common classes depending on server-side 
APIs. But I wanted to call out the dependency limitation so that others know 
about it and can discuss any such cases if they have them.

> Move server related common and public classes into separate module(s).
> --
>
> Key: KAFKA-12757
> URL: https://issues.apache.org/jira/browse/KAFKA-12757
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Satish Duggana
>Priority: Major
>
> There are two sets of classes that we want to pull out here for server 
> common/api classes.
> 1. All the common classes used by server modules.
>  2. All the public server side classes exposed to users. Some of these 
> classes are storage-api, security, quotas etc.
> A couple of approaches that we can consider here:
> 1. Both sets of classes will be added in server-common module but common 
> classes will have a package prefix as org.apache.kafka.server.common. But 
> public classes will continue to have the existing package names. We will 
> generate javadocs for these public classes.
>  Pros
>  - Users and server modules will be dependent on a single module for both 
> common and public APIs.
>  - Both common classes and api classes can be dependent on each other.
> Cons
>  - Not a clean separation between common classes and public classes.
> 2. Common classes used by server modules will be added in server-common 
> module. We will create another module called server-api for server side 
> public classes.
> Pros
>  - It gives a neat separation between common and public classes.
>  - Maintaining the growth of these modules will be easy.
> Cons
>  - We cannot have common and api classes depend on each other, which 
> would cause a circular dependency.
>  
> Please feel free to modify/add other approaches to modularizing these 
> classes. 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] ableegoldman commented on pull request #10637: MINOR: remove storage/src/generated from tracked files

2021-05-10 Thread GitBox


ableegoldman commented on pull request #10637:
URL: https://github.com/apache/kafka/pull/10637#issuecomment-837691327


   Is it possible to set up the gitignore to automatically ignore anything 
under a `generated/` subdirectory?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (KAFKA-9009) Flaky Test kafka.integration.MetricsDuringTopicCreationDeletionTest.testMetricsDuringTopicCreateDelete

2021-05-10 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen reassigned KAFKA-9009:


Assignee: Luke Chen

> Flaky Test 
> kafka.integration.MetricsDuringTopicCreationDeletionTest.testMetricsDuringTopicCreateDelete
> --
>
> Key: KAFKA-9009
> URL: https://issues.apache.org/jira/browse/KAFKA-9009
> Project: Kafka
>  Issue Type: Test
>  Components: core
>Affects Versions: 2.5.0, 2.6.0
>Reporter: Bill Bejeck
>Assignee: Luke Chen
>Priority: Major
>  Labels: flaky-test
>
> Failure seen in 
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/25644/testReport/junit/kafka.integration/MetricsDuringTopicCreationDeletionTest/testMetricsDuringTopicCreateDelete/]
>  
> {noformat}
> Error Message: java.lang.AssertionError: assertion failed: 
> UnderReplicatedPartitionCount not 0: 1
> Stacktrace: java.lang.AssertionError: 
> assertion failed: UnderReplicatedPartitionCount not 0: 1
>   at scala.Predef$.assert(Predef.scala:170)
>   at 
> kafka.integration.MetricsDuringTopicCreationDeletionTest.testMetricsDuringTopicCreateDelete(MetricsDuringTopicCreationDeletionTest.scala:123)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> 

[jira] [Commented] (KAFKA-9009) Flaky Test kafka.integration.MetricsDuringTopicCreationDeletionTest.testMetricsDuringTopicCreateDelete

2021-05-10 Thread Luke Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342261#comment-17342261
 ] 

Luke Chen commented on KAFKA-9009:
--

investigating

> Flaky Test 
> kafka.integration.MetricsDuringTopicCreationDeletionTest.testMetricsDuringTopicCreateDelete
> --
>
> Key: KAFKA-9009
> URL: https://issues.apache.org/jira/browse/KAFKA-9009
> Project: Kafka
>  Issue Type: Test
>  Components: core
>Affects Versions: 2.5.0, 2.6.0
>Reporter: Bill Bejeck
>Assignee: Luke Chen
>Priority: Major
>  Labels: flaky-test
>
> Failure seen in 
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/25644/testReport/junit/kafka.integration/MetricsDuringTopicCreationDeletionTest/testMetricsDuringTopicCreateDelete/]
>  
> {noformat}
> Error Message: java.lang.AssertionError: assertion failed: 
> UnderReplicatedPartitionCount not 0: 1
> Stacktrace: java.lang.AssertionError: 
> assertion failed: UnderReplicatedPartitionCount not 0: 1
>   at scala.Predef$.assert(Predef.scala:170)
>   at 
> kafka.integration.MetricsDuringTopicCreationDeletionTest.testMetricsDuringTopicCreateDelete(MetricsDuringTopicCreationDeletionTest.scala:123)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> 

[GitHub] [kafka] ableegoldman commented on pull request #10664: KAFKA-12749: Changelog topic config on suppressed KTable lost

2021-05-10 Thread GitBox


ableegoldman commented on pull request #10664:
URL: https://github.com/apache/kafka/pull/10664#issuecomment-837607688


   cc any of @cadonna @vvcephei @lct45 @wcarlson5 @mjsax @guozhangwang to 
review this


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] ableegoldman commented on a change in pull request #6592: KAFKA-8326: Introduce List Serde

2021-05-10 Thread GitBox


ableegoldman commented on a change in pull request #6592:
URL: https://github.com/apache/kafka/pull/6592#discussion_r629768563



##
File path: 
clients/src/main/java/org/apache/kafka/common/serialization/Serdes.java
##
@@ -265,4 +287,14 @@ public UUIDSerde() {
 static public Serde<Void> Void() {
 return new VoidSerde();
 }
+
+/*
+ * A serde for {@code List} type
+ */
+static public <L extends List<Inner>, Inner> Serde<List<Inner>>

Review comment:
   Same here, --> `public static`. Can you also leave it on one line? I 
know it's super long, but that's just the style we use in Kafka
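
   For context, a hedged usage sketch of the serde under review (the two-argument factory is assumed from the diff above; the merged API may differ):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;

// Hypothetical round-trip through the proposed List serde.
public class ListSerdeUsageSketch {
    public static void main(String[] args) {
        final Serde<List<Integer>> serde = Serdes.ListSerde(ArrayList.class, Serdes.Integer());
        final List<Integer> values = new ArrayList<>(Arrays.asList(1, 2, 3));
        final byte[] bytes = serde.serializer().serialize("topic", values);
        System.out.println(serde.deserializer().deserialize("topic", bytes)); // [1, 2, 3]
    }
}
```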

##
File path: 
clients/src/main/java/org/apache/kafka/common/serialization/ListDeserializer.java
##
@@ -0,0 +1,197 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.common.serialization;
+
+import org.apache.kafka.clients.CommonClientConfigs;
+import org.apache.kafka.common.KafkaException;
+import org.apache.kafka.common.config.ConfigException;
+import org.apache.kafka.common.errors.SerializationException;
+import org.apache.kafka.common.utils.Utils;
+
+import java.io.ByteArrayInputStream;
+import java.io.DataInputStream;
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.kafka.common.serialization.Serdes.ListSerde.SerializationStrategy;
+import static org.apache.kafka.common.utils.Utils.mkEntry;
+import static org.apache.kafka.common.utils.Utils.mkMap;
+
+public class ListDeserializer<Inner> implements Deserializer<List<Inner>> {
+
+    private static final Map<Class<? extends Deserializer<?>>, Integer> FIXED_LENGTH_DESERIALIZERS = mkMap(
+        mkEntry(ShortDeserializer.class, Short.BYTES),
+        mkEntry(IntegerDeserializer.class, Integer.BYTES),
+        mkEntry(FloatDeserializer.class, Float.BYTES),
+        mkEntry(LongDeserializer.class, Long.BYTES),
+        mkEntry(DoubleDeserializer.class, Double.BYTES),
+        mkEntry(UUIDDeserializer.class, 36)
+    );
+
+    private Deserializer<Inner> inner;
+    private Class<?> listClass;
+    private Integer primitiveSize;
+
+    public ListDeserializer() {}
+
+    public <L extends List<Inner>> ListDeserializer(Class<L> listClass, Deserializer<Inner> inner) {
+        if (listClass == null || inner == null) {
+            throw new IllegalArgumentException("ListDeserializer requires both \"listClass\" and \"innerDeserializer\" parameters to be provided during initialization");
+        }
+        this.listClass = listClass;
+        this.inner = inner;
+        this.primitiveSize = FIXED_LENGTH_DESERIALIZERS.get(inner.getClass());
+    }
+
+    public Deserializer<Inner> getInnerDeserializer() {

Review comment:
   nit: Kafka coding style doesn't use the `get` prefix in getters, i.e., this 
should be named `innerDeserializer` (the same applies to any other getters in 
this PR; I won't bug you by commenting on every single one of them)

##
File path: 
clients/src/main/java/org/apache/kafka/common/serialization/ListSerializer.java
##
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.common.serialization;
+
+import org.apache.kafka.clients.CommonClientConfigs;
+import org.apache.kafka.common.KafkaException;
+import org.apache.kafka.common.config.ConfigException;
+import org.apache.kafka.common.utils.Utils;
+
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutputStream;

[GitHub] [kafka] vichu commented on pull request #10664: KAFKA-12749: Changelog topic config on suppressed KTable lost

2021-05-10 Thread GitBox


vichu commented on pull request #10664:
URL: https://github.com/apache/kafka/pull/10664#issuecomment-837597679


   @ableegoldman Would appreciate it if you could take a look at this PR when 
you get a chance. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] vichu opened a new pull request #10664: KAFKA-12749: Changelog topic config on suppressed KTable lost

2021-05-10 Thread GitBox


vichu opened a new pull request #10664:
URL: https://github.com/apache/kafka/pull/10664


   Refactored `logConfig` to be passed appropriately when using 
`shutDownWhenFull` or `emitEarlyWhenFull`. Removed the constructor that doesn't 
accept a `logConfig` parameter so you're forced to specify it explicitly, 
whether it's empty/unspecified or not.
   
   Ticket: https://issues.apache.org/jira/browse/KAFKA-12749
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] vitojeng commented on pull request #10657: KAFKA-5876: Apply InvalidStateStorePartitionException for Interactive Queries

2021-05-10 Thread GitBox


vitojeng commented on pull request #10657:
URL: https://github.com/apache/kafka/pull/10657#issuecomment-837571301


   @ableegoldman Thanks. :)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] ijuma commented on pull request #10606: KAFKA-12728: version upgrades: gradle (6.8.3 -->> 7.0.1) and gradle shadow plugin (6.1.0 -->> 7.0.0)

2021-05-10 Thread GitBox


ijuma commented on pull request #10606:
URL: https://github.com/apache/kafka/pull/10606#issuecomment-837551412


   Looks like there is a change in behavior in Gradle 7 related to resource 
files that's causing a bunch of tests to fail.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] ableegoldman commented on pull request #10657: KAFKA-5876: Apply InvalidStateStorePartitionException for Interactive Queries

2021-05-10 Thread GitBox


ableegoldman commented on pull request #10657:
URL: https://github.com/apache/kafka/pull/10657#issuecomment-837549700


   Merged to trunk


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] ableegoldman merged pull request #10657: KAFKA-5876: Apply InvalidStateStorePartitionException for Interactive Queries

2021-05-10 Thread GitBox


ableegoldman merged pull request #10657:
URL: https://github.com/apache/kafka/pull/10657


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] ijuma commented on pull request #10306: MINOR: socket setup max should be 30 seconds

2021-05-10 Thread GitBox


ijuma commented on pull request #10306:
URL: https://github.com/apache/kafka/pull/10306#issuecomment-837508654


   @cmccabe We should update the KIP and share a note in the mailing list 
thread. Also, was this released with `127` as the default or did we change it 
before the first release? If the former, we should add a release note, right?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] mdedetrich edited a comment on pull request #10648: KAFKA-9726: Add IdentityReplicationPolicy for MM2

2021-05-10 Thread GitBox


mdedetrich edited a comment on pull request #10648:
URL: https://github.com/apache/kafka/pull/10648#issuecomment-837469565


   KIP has been created at 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-737%3A+Add+canTrackSource+to+ReplicationPolicy
 and a new thread has been started in the kafka-dev mailing list for this topic 
(let me know if anything more needs to be done).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] mdedetrich commented on pull request #10648: KAFKA-9726: Add IdentityReplicationPolicy for MM2

2021-05-10 Thread GitBox


mdedetrich commented on pull request #10648:
URL: https://github.com/apache/kafka/pull/10648#issuecomment-837469565


   KIP has been created at 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-737%3A+Add+canTrackSource+to+ReplicationPolicy
 and a new thread has been started in the apache-dev mailing list for this 
topic (let me know if anything more needs to be done).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] dejan2609 commented on pull request #10656: MINOR: checkstyle version upgrade: 8.20 -->> 8.36.2

2021-05-10 Thread GitBox


dejan2609 commented on pull request #10656:
URL: https://github.com/apache/kafka/pull/10656#issuecomment-837469070


   @romani, @ijuma 
   
   > Do you want to contribute the change to allow the checkstyle version to be 
specified via a parameter?
   
   Sure, I created this JIRA ticket for myself (a PR will follow in the next few 
days, I hope): https://issues.apache.org/jira/browse/KAFKA-12770 **_Jenkins 
build: allow the CheckStyle version to be specified via parameter_**
   
   > Can you provide a bit more detail on the stricter rules?
   
   Separate ticket is filed here: 
https://issues.apache.org/jira/browse/KAFKA-12771 **_CheckStyle attempted 
upgrade (8.36.2 -->> 8.41.1) summons a pack of 'Indentation' errors_** 
   I will provide some more details for discussion soon.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (KAFKA-12771) CheckStyle attempted upgrade (8.36.2 -->> 8.41.1) summons a pack of 'Indentation' errors

2021-05-10 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-12771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dejan Stojadinović updated KAFKA-12771:
---
Description: 
^*Prologue*: 
[https://github.com/apache/kafka/pull/10656#issuecomment-836071563]^

*Scenario:*
 * bump CheckStyle to a more recent version (8.36.2 -->> 8.41.1)
 * introduce (temporarily !) maxErrors CheckStyle property (in order to count 
errors)
 * execute gradle command: *_./gradlew checkstyleMain checkstyleTest_*
 * around 50 'Indentation' CheckStyle errors (across 18 source code files) 
are shown

*Note:* there were some changes related to indentation between CheckStyle 
*8.36.2* and *8.41.1*: 
[https://checkstyle.sourceforge.io/releasenotes.html#Release_8.41.1]

*What can be done (options):*
 # relax CheckStyle 'Indentation' rules (if possible)
 # comply with new CheckStyle 'Indentation' rules (and change/fix indentation 
for these source code files)
 # there is a slim chance that this is some kind of CheckStyle 
regression (maybe similar to this one: 
[https://github.com/checkstyle/checkstyle/issues/9341]). This should be checked 
with the CheckStyle team.

  was:
^*Prologue*: 
[https://github.com/apache/kafka/pull/10656#issuecomment-836071563]^

*Scenario:*
 * bump CheckStyle to a more recent version (8.36.2 -->> 8.41.1)
 * introduce (temporarily !) maxErrors CheckStyle property (in order to count 
errors)
 * execute gradle command: *_./gradlew checkstyleMain checkstyleTest_*
 * around 50 'Indentation' CheckStyle errors (across 18 source code files) 
are shown

*Note:* there were some changes related to indentation between CheckStyle 
*8.36.2* and * 8.41.1*: 
[https://checkstyle.sourceforge.io/releasenotes.html#Release_8.41.1]

*What can be done (options):*
 # relax CheckStyle 'Indentation' rules (if possible)
 # comply with new CheckStyle 'Indentation' rules (and change/fix indentation 
for these source code files)
 # there is a slim chance that this is some kind of CheckStyle 
regression (maybe similar to this one: 
[https://github.com/checkstyle/checkstyle/issues/9341]). This should be checked 
with the CheckStyle team.


> CheckStyle attempted upgrade (8.36.2 -->> 8.41.1) summons a pack of 
> 'Indentation' errors
> 
>
> Key: KAFKA-12771
> URL: https://issues.apache.org/jira/browse/KAFKA-12771
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Dejan Stojadinović
>Assignee: Dejan Stojadinović
>Priority: Minor
>
> ^*Prologue*: 
> [https://github.com/apache/kafka/pull/10656#issuecomment-836071563]^
> *Scenario:*
>  * bump CheckStyle to a more recent version (8.36.2 -->> 8.41.1)
>  * introduce (temporarily !) maxErrors CheckStyle property (in order to count 
> errors)
>  * execute gradle command: *_./gradlew checkstyleMain checkstyleTest_*
>  * around 50 'Indentation' CheckStyle errors (across 18 source code files) 
> are shown
> *Note:* there were some changes related to indentation between CheckStyle 
> *8.36.2* and *8.41.1*: 
> [https://checkstyle.sourceforge.io/releasenotes.html#Release_8.41.1]
> *What can be done (options):*
>  # relax CheckStyle 'Indentation' rules (if possible)
>  # comply with new CheckStyle 'Indentation' rules (and change/fix indentation 
> for these source code files)
>  # there is a slim chance that this is some kind of CheckStyle 
> regression (maybe similar to this one: 
> [https://github.com/checkstyle/checkstyle/issues/9341]). This should be 
> checked with the CheckStyle team.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12771) CheckStyle attempted upgrade (8.36.2 -->> 8.41.1) summons a pack of 'Indentation' errors

2021-05-10 Thread Jira
Dejan Stojadinović created KAFKA-12771:
--

 Summary: CheckStyle attempted upgrade (8.36.2 -->> 8.41.1) summons 
a pack of 'Indentation' errors
 Key: KAFKA-12771
 URL: https://issues.apache.org/jira/browse/KAFKA-12771
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Dejan Stojadinović
Assignee: Dejan Stojadinović


^*Prologue*: 
[https://github.com/apache/kafka/pull/10656#issuecomment-836071563]^

*Scenario:*
 * bump CheckStyle to a more recent version (8.36.2 -->> 8.41.1)
 * introduce (temporarily !) maxErrors CheckStyle property (in order to count 
errors)
 * execute gradle command: *_./gradlew checkstyleMain checkstyleTest_*
 * around 50 'Indentation' CheckStyle errors (across 18 source code files) 
are shown

*Note:* there were some changes related to indentation between CheckStyle 
*8.36.2* and * 8.41.1*: 
[https://checkstyle.sourceforge.io/releasenotes.html#Release_8.41.1]

*What can be done (options):*
 # relax CheckStyle 'Indentation' rules (if possible)
 # comply with new CheckStyle 'Indentation' rules (and change/fix indentation 
for these source code files)
 # there is a slim chance that this is some kind of CheckStyle 
regression (maybe similar to this one: 
[https://github.com/checkstyle/checkstyle/issues/9341]). This should be checked 
with the CheckStyle team.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12770) Jenkins build: allow the CheckStyle version to be specified via parameter

2021-05-10 Thread Jira
Dejan Stojadinović created KAFKA-12770:
--

 Summary: Jenkins build: allow the CheckStyle version to be 
specified via parameter
 Key: KAFKA-12770
 URL: https://issues.apache.org/jira/browse/KAFKA-12770
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Dejan Stojadinović
Assignee: Dejan Stojadinović


  ^*(i) Prologue*: 
[https://github.com/apache/kafka/pull/10656#issuecomment-836074067]^

*(on) Rationale:* if we implement this, the CheckStyle team ([~romani] and 
others) can add the Kafka project to their regression suite: 
[https://github.com/apache/kafka/pull/10656#issuecomment-835809154] 

*Related links:*
 * [https://github.com/apache/kafka/blob/2.8.0/Jenkinsfile#L28]
 * [https://github.com/apache/kafka#common-build-options]
 * 
[https://docs.gradle.org/7.0.1/dsl/org.gradle.api.plugins.quality.CheckstyleExtension.html#org.gradle.api.plugins.quality.CheckstyleExtension:toolVersion]
  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12728) Version upgrades: gradle (6.8.3 -->> 7.0.1) and gradle shadow plugin (6.1.0 -->> 7.0.0)

2021-05-10 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dejan Stojadinović updated KAFKA-12728:
---
Summary: Version upgrades: gradle (6.8.3 -->> 7.0.1) and gradle shadow 
plugin (6.1.0 -->> 7.0.0)  (was: Update Gradle version: 6.8 -->> 7.0)

> Version upgrades: gradle (6.8.3 -->> 7.0.1) and gradle shadow plugin (6.1.0 
> -->> 7.0.0)
> ---
>
> Key: KAFKA-12728
> URL: https://issues.apache.org/jira/browse/KAFKA-12728
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Dejan Stojadinović
>Assignee: Dejan Stojadinović
>Priority: Major
>
> ^_*Prologue/related tickets*:_ KAFKA-12415 _*and*_ KAFKA-12417^
> *Gradle 7.0 release notes:* [https://docs.gradle.org/7.0/release-notes.html]
> *_+Note+_*: it makes sense to wait for a patch *_7.0.1_* to be released: 
> [https://github.com/gradle/gradle/milestone/173]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] ijuma commented on a change in pull request #10620: KAFKA-12736: KafkaProducer.flush holds onto completed ProducerBatch(s) until flush completes

2021-05-10 Thread GitBox


ijuma commented on a change in pull request #10620:
URL: https://github.com/apache/kafka/pull/10620#discussion_r629703346



##
File path: 
clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
##
@@ -710,8 +710,11 @@ private boolean appendsInProgress() {
  */
 public void awaitFlushCompletion() throws InterruptedException {
 try {
-for (ProducerBatch batch : this.incomplete.copyAll())
-batch.produceFuture.await();
+// Obtain a copy of all of the incomplete ProduceRequestResult(s) at the time of the flush.
+// We must be careful not to hold a reference to the ProducerBatch(s) so that garbage
+// collection can occur on the contents.

Review comment:
   I think I'd mention this bit from your message: `the sender will remove 
the producer batches from the original incomplete collection`. This explains 
why we should not hold on to any batches.
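
   In other words, snapshot only the per-batch futures, not the batches themselves. A hedged sketch with simplified types (not the real `ProducerBatch`/`ProduceRequestResult`):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hedged sketch: once the sender completes a batch and removes it from the
// incomplete collection, nothing here still references it, so its payload
// can be garbage-collected while flush() is still waiting on other batches.
class FlushAwaitSketch {
    static final class Batch {
        final CompletableFuture<Void> result = new CompletableFuture<>();
        final byte[] payload = new byte[1 << 20]; // large buffer we want GC'd early
    }

    static void awaitFlush(Collection<Batch> incomplete) {
        List<CompletableFuture<Void>> results = new ArrayList<>();
        for (Batch batch : incomplete) {
            results.add(batch.result); // keep only the future, not the batch
        }
        for (CompletableFuture<Void> result : results) {
            result.join(); // batches may be GC'd while we block here
        }
    }
}
```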




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (KAFKA-12768) Mirrormaker2 consumer config not using newly assigned client id

2021-05-10 Thread Vincent (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent updated KAFKA-12768:

Attachment: (was: mirroramker2.properties)

> Mirrormaker2 consumer config not using newly assigned client id
> ---
>
> Key: KAFKA-12768
> URL: https://issues.apache.org/jira/browse/KAFKA-12768
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.6.0
>Reporter: Vincent
>Priority: Major
>
> Component: MirrorMaker2 from the 2.6.0 distribution.
> We tried to set quotas based on client.id in mirrormaker2. We tried setting 
> the source.consumer.client.id and source.client.id properties with no luck.
> I was able to update the consumer client id using the 
> US->EUROPE.consumer.client.id config (from the customer) with a single 
> instance of MM2. With a single instance, everything works fine without any 
> issue. However, we are running 2 instances of MirrorMaker 2 with tasks.max 
> set to 2 and it doesn't work with multiple MM2 processes. We also tried 
> stopping all mirrormaker2 instances and starting them again, but it didn't help.
> Currently, the only workaround is to recreate (or rename) the Connect 
> internal topics.
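
For reference, the overrides being attempted, as a hedged sketch (property names are taken from this report; the values are illustrative):

```java
import java.util.Properties;

// Illustrative values only; the report is that these overrides take effect
// with a single MM2 instance but not with two instances and tasks.max = 2.
public class Mm2ClientIdOverridesSketch {
    public static Properties overrides() {
        Properties props = new Properties();
        props.put("tasks.max", "2");
        props.put("source.consumer.client.id", "mm2-source-consumer");    // reported: no effect
        props.put("US->EUROPE.consumer.client.id", "mm2-us-eu-consumer"); // works single-instance
        return props;
    }
}
```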



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12768) Mirrormaker2 consumer config not using newly assigned client id

2021-05-10 Thread Vincent (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent updated KAFKA-12768:

Attachment: (was: mirrormaker.log)

> Mirrormaker2 consumer config not using newly assigned client id
> ---
>
> Key: KAFKA-12768
> URL: https://issues.apache.org/jira/browse/KAFKA-12768
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.6.0
>Reporter: Vincent
>Priority: Major
>
> Component: MirrorMaker2 from the 2.6.0 distribution.
> We tried to set quotas based on client.id in mirrormaker2. We tried setting 
> the source.consumer.client.id and source.client.id properties with no luck.
> I was able to update the consumer client id using the 
> US->EUROPE.consumer.client.id config (from the customer) with a single 
> instance of MM2. With a single instance, everything works fine without any 
> issue. However, we are running 2 instances of MirrorMaker 2 with tasks.max 
> set to 2 and it doesn't work with multiple MM2 processes. We also tried 
> stopping all mirrormaker2 instances and starting them again, but it didn't help.
> Currently, the only workaround is to recreate (or rename) the Connect 
> internal topics.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12768) Mirrormaker2 consumer config not using newly assigned client id

2021-05-10 Thread Vincent (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent updated KAFKA-12768:

Attachment: (was: instance2_log)

> Mirrormaker2 consumer config not using newly assigned client id
> ---
>
> Key: KAFKA-12768
> URL: https://issues.apache.org/jira/browse/KAFKA-12768
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.6.0
>Reporter: Vincent
>Priority: Major
>
> Component: MirrorMaker2 from the 2.6.0 distribution.
> We tried to set quotas based on client.id in mirrormaker2. We tried setting 
> the source.consumer.client.id and source.client.id properties with no luck.
> I was able to update the consumer client id using the 
> US->EUROPE.consumer.client.id config (from the customer) with a single 
> instance of MM2. With a single instance, everything works fine without any 
> issue. However, we are running 2 instances of MirrorMaker 2 with tasks.max 
> set to 2 and it doesn't work with multiple MM2 processes. We also tried 
> stopping all mirrormaker2 instances and starting them again, but it didn't help.
> Currently, the only workaround is to recreate (or rename) the Connect 
> internal topics.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12768) Mirrormaker2 consumer config not using newly assigned client id

2021-05-10 Thread Vincent (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent updated KAFKA-12768:

Attachment: (was: instance1_log)

> Mirrormaker2 consumer config not using newly assigned client id
> ---
>
> Key: KAFKA-12768
> URL: https://issues.apache.org/jira/browse/KAFKA-12768
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.6.0
>Reporter: Vincent
>Priority: Major
>
> Component: MirrorMaker2 from the 2.6.0 distribution.
> We tried to set quotas based on client.id in mirrormaker2. We tried setting 
> the source.consumer.client.id and source.client.id properties with no luck.
> I was able to update the consumer client id using the 
> US->EUROPE.consumer.client.id config (from the customer) with a single 
> instance of MM2. With a single instance, everything works fine without any 
> issue. However, we are running 2 instances of MirrorMaker 2 with tasks.max 
> set to 2 and it doesn't work with multiple MM2 processes. We also tried 
> stopping all mirrormaker2 instances and starting them again, but it didn't help.
> Currently, the only workaround is to recreate (or rename) the Connect 
> internal topics.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12769) Backport of KAFKA-8562

2021-05-10 Thread Josep Prat (Jira)
Josep Prat created KAFKA-12769:
--

 Summary: Backport of KAFKA-8562
 Key: KAFKA-12769
 URL: https://issues.apache.org/jira/browse/KAFKA-12769
 Project: Kafka
  Issue Type: Task
  Components: network
Reporter: Josep Prat
Assignee: Josep Prat


Kafka-8562 solved the issue of SASL performing a reverse DNS lookup to resolve 
the IP.

This bug fix should be backported so it's present in the 2.7.x and 2.8.x 
versions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12769) Backport of KAFKA-8562

2021-05-10 Thread Josep Prat (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342144#comment-17342144
 ] 

Josep Prat commented on KAFKA-12769:


I'll work on this

> Backport of KAFKA-8562
> --
>
> Key: KAFKA-12769
> URL: https://issues.apache.org/jira/browse/KAFKA-12769
> Project: Kafka
>  Issue Type: Task
>  Components: network
>Reporter: Josep Prat
>Assignee: Josep Prat
>Priority: Major
>
> KAFKA-8562 solved the issue of SASL performing a reverse DNS lookup to 
> resolve the IP.
> This bug fix should be backported so it's present in the 2.7.x and 2.8.x versions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12768) Mirrormaker2 consumer config not using newly assigned client id

2021-05-10 Thread Vincent (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent updated KAFKA-12768:

Description: 
Component: MirrorMaker2 from the 2.6.0 distribution.

We tried to set quotas based on client.id in MirrorMaker 2. We tried setting the 
source.consumer.client.id and source.client.id properties with no luck.

I was able to update the consumer client id using the 
US->EUROPE.consumer.client.id config (from the customer) with a single instance 
of MM2. With a single instance, everything works fine without any issue. 
However, we are running 2 instances of MirrorMaker 2 with tasks.max set to 2 
and it doesn't work with multiple MM2 processes. We also tried stopping all 
MirrorMaker 2 instances and starting them again, but it didn't help.

Currently, the only workaround is to recreate (or rename) the Connect internal 
topics.

  was:
Component: MirrorMaker2 from the 2.6.0 distribution.

The customer tried to set quotas based on client.id in MirrorMaker 2. He tried 
setting the source.consumer.client.id and source.client.id properties with no luck.

I was able to update the consumer client id using the 
US->EUROPE.consumer.client.id config (from the customer) with a single instance 
of MM2. The customer also indicated that, with a single instance, everything 
works fine without any issue. However, we are running 2 instances of 
MirrorMaker 2 with tasks.max set to 2. He also tried stopping all MirrorMaker 2 
instances and starting them again, but it didn't help.

Currently, the only workaround is to recreate (or rename) the Connect internal 
topics.


> Mirrormaker2 consumer config not using newly assigned client id
> ---
>
> Key: KAFKA-12768
> URL: https://issues.apache.org/jira/browse/KAFKA-12768
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.6.0
>Reporter: Vincent
>Priority: Major
> Attachments: instance1_log, instance2_log, mirroramker2.properties, 
> mirrormaker.log
>
>
> Component: MirrorMaker2 from the 2.6.0 distribution.
> We tried to set quotas based on client.id in MirrorMaker 2. We tried setting the 
> source.consumer.client.id and source.client.id properties with no luck.
> I was able to update the consumer client id using the 
> US->EUROPE.consumer.client.id config (from the customer) with a single 
> instance of MM2. With a single instance, everything works fine without any 
> issue. However, we are running 2 instances of MirrorMaker 2 with tasks.max 
> set to 2 and it doesn't work with multiple MM2 processes. We also tried 
> stopping all MirrorMaker 2 instances and starting them again, but it didn't help.
> Currently, the only workaround is to recreate (or rename) the Connect 
> internal topics.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12768) Mirrormaker2 consumer config not using newly assigned client id

2021-05-10 Thread Vincent (Jira)
Vincent created KAFKA-12768:
---

 Summary: Mirrormaker2 consumer config not using newly assigned 
client id
 Key: KAFKA-12768
 URL: https://issues.apache.org/jira/browse/KAFKA-12768
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Affects Versions: 2.6.0
Reporter: Vincent
 Attachments: instance1_log, instance2_log, mirroramker2.properties, 
mirrormaker.log

Component: MirrorMaker2 from the 2.6.0 distribution.

The customer tried to set quotas based on client.id in MirrorMaker 2. He tried 
setting the source.consumer.client.id and source.client.id properties with no luck.

I was able to update the consumer client id using the 
US->EUROPE.consumer.client.id config (from the customer) with a single instance 
of MM2. The customer also indicated that, with a single instance, everything 
works fine without any issue. However, we are running 2 instances of 
MirrorMaker 2 with tasks.max set to 2. He also tried stopping all MirrorMaker 2 
instances and starting them again, but it didn't help.

Currently, the only workaround is to recreate (or rename) the Connect internal 
topics.
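
For reference, a minimal sketch of the kind of per-replication-flow client.id 
override being attempted (the property name is the one from this report; the 
cluster aliases and the value itself are hypothetical):
{code}
clusters = US, EUROPE
US->EUROPE.enabled = true
# worked with a single MM2 instance, but not with two instances and tasks.max = 2:
US->EUROPE.consumer.client.id = mm2-us-europe-consumer
{code}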



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12766) Consider Disabling WAL-related Options in RocksDB

2021-05-10 Thread A. Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

A. Sophie Blee-Goldman updated KAFKA-12766:
---
Fix Version/s: 3.0.0

> Consider Disabling WAL-related Options in RocksDB
> -
>
> Key: KAFKA-12766
> URL: https://issues.apache.org/jira/browse/KAFKA-12766
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bruno Cadonna
>Priority: Minor
> Fix For: 3.0.0
>
>
> Streams disables the write-ahead log (WAL) provided by RocksDB since it 
> replicates the data in changelog topics. Hence, it does not make much sense 
> to set WAL-related configs for RocksDB instances within Streams.
> Streams could:
> - disable WAL-related options
> - ignore WAL-related options
> - throw an exception when a WAL-related option is set.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12766) Consider Disabling WAL-related Options in RocksDB

2021-05-10 Thread A. Sophie Blee-Goldman (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342123#comment-17342123
 ] 

A. Sophie Blee-Goldman commented on KAFKA-12766:


Personally I agree that throwing an exception may be too harsh; it's not my 
impression that any of the WAL-related options are "critical" enough that a 
user would want or need to be informed if they were going to be ignored. We 
should definitely log a warning at the least.

 

Can you clarify a bit more the difference between the first two approaches, 
i.e. disable vs ignore? AFAICT they are pretty much the same -- since all user 
options pass through our adapter class, if we just don't pass them through to 
the method on the underlying Options then we've both ignored and disabled it.
{quote}I do not understand how we should disable the option since we do not 
have control over the `Options` object
{quote}
Based on this sentence I'm wondering if you had something else in mind by 
"disable"? I feel that "ignoring" it may be sufficient, for the reasons 
described above, plus it's simple to implement.
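
To make the "ignore" option concrete, a minimal sketch of what it could look 
like in our options adapter, assuming a logger is in scope (illustrative only, 
not the actual patch):
{code:java}
@Override
public Options setManualWalFlush(final boolean manualWalFlush) {
    // WAL is disabled by Streams anyway: log a warning and drop the value
    // instead of delegating to dbOptions.setManualWalFlush(...)
    log.warn("WAL is disabled by Streams; ignoring manualWalFlush = {}", manualWalFlush);
    return this;
}
{code}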

> Consider Disabling WAL-related Options in RocksDB
> -
>
> Key: KAFKA-12766
> URL: https://issues.apache.org/jira/browse/KAFKA-12766
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bruno Cadonna
>Priority: Minor
>
> Streams disables the write-ahead log (WAL) provided by RocksDB since it 
> replicates the data in changelog topics. Hence, it does not make much sense 
> to set WAL-related configs for RocksDB instances within Streams.
> Streams could:
> - disable WAL-related options
> - ignore WAL-related options
> - throw an exception when a WAL-related option is set.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12747) Flaky Test RocksDBStoreTest.shouldReturnUUIDsWithStringPrefix

2021-05-10 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-12747:
--
Fix Version/s: 2.8.1

> Flaky Test RocksDBStoreTest.shouldReturnUUIDsWithStringPrefix
> -
>
> Key: KAFKA-12747
> URL: https://issues.apache.org/jira/browse/KAFKA-12747
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: A. Sophie Blee-Goldman
>Assignee: Guozhang Wang
>Priority: Major
>  Labels: flaky-test, newbie, newbie++, unit-test
> Fix For: 3.0.0, 2.8.1
>
>
> Stacktrace
> java.lang.AssertionError: 
> Expected: is <1>
>  but: was <2>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBStoreTest.shouldReturnUUIDsWithStringPrefix(RocksDBStoreTest.java:463)
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-10597/10/tests/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12749) Changelog topic config on suppressed KTable lost

2021-05-10 Thread A. Sophie Blee-Goldman (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342120#comment-17342120
 ] 

A. Sophie Blee-Goldman commented on KAFKA-12749:


Thanks [~vishranganathan], assigned the ticket to you. I also added you as a 
contributor so you can self-assign from now on.

> Changelog topic config on suppressed KTable lost
> 
>
> Key: KAFKA-12749
> URL: https://issues.apache.org/jira/browse/KAFKA-12749
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.6.0, 2.7.0, 2.8.0
>Reporter: Philip Bourke
>Assignee: Viswanathan Ranganathan
>Priority: Minor
>  Labels: newbie, newbie++
> Fix For: 3.0.0, 2.8.1
>
>
> When trying to set the changelog configuration on a suppressed KTable, the 
> config is lost if either {{emitEarlyWhenFull}} or {{shutDownWhenFull}} is set 
> after the logging config.
> This works - 
> {code:java}
> .suppress(Suppressed.untilTimeLimit(Duration.ofMillis(maxIdleIntervalMs), 
> BufferConfig.maxRecords(
>  
> maxBufferRecords).emitEarlyWhenFull().withLoggingEnabled(changelogConfig)){code}
> but not if you set {{emitEarlyWhenFull}} last.
> See comments in https://issues.apache.org/jira/browse/KAFKA-8147
>  
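
For contrast, a sketch of the ordering that reportedly loses the logging config 
(same hypothetical variables as the snippet quoted above):
{code:java}
.suppress(Suppressed.untilTimeLimit(Duration.ofMillis(maxIdleIntervalMs),
    BufferConfig.maxRecords(maxBufferRecords)
        .withLoggingEnabled(changelogConfig)
        .emitEarlyWhenFull())){code}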



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-12749) Changelog topic config on suppressed KTable lost

2021-05-10 Thread A. Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

A. Sophie Blee-Goldman reassigned KAFKA-12749:
--

Assignee: Viswanathan Ranganathan

> Changelog topic config on suppressed KTable lost
> 
>
> Key: KAFKA-12749
> URL: https://issues.apache.org/jira/browse/KAFKA-12749
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.6.0, 2.7.0, 2.8.0
>Reporter: Philip Bourke
>Assignee: Viswanathan Ranganathan
>Priority: Minor
>  Labels: newbie, newbie++
> Fix For: 3.0.0, 2.8.1
>
>
> When trying to set the changelog configuration on a suppressed KTable, the 
> config is lost if either {{emitEarlyWhenFull}} or {{shutDownWhenFull}} is set 
> after the logging config.
> This works - 
> {code:java}
> .suppress(Suppressed.untilTimeLimit(Duration.ofMillis(maxIdleIntervalMs), 
> BufferConfig.maxRecords(
>  
> maxBufferRecords).emitEarlyWhenFull().withLoggingEnabled(changelogConfig)){code}
> but not if you set {{emitEarlyWhenFull}} last.
> See comments in https://issues.apache.org/jira/browse/KAFKA-8147
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] ableegoldman commented on a change in pull request #10568: KAFKA-8897: Upgrade RocksDB to 6.19.3

2021-05-10 Thread GitBox


ableegoldman commented on a change in pull request #10568:
URL: https://github.com/apache/kafka/pull/10568#discussion_r629637264



##
File path: 
streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapter.java
##
@@ -1381,6 +1364,313 @@ public WriteBufferManager writeBufferManager() {
 return dbOptions.writeBufferManager();
 }
 
+@Override
+public Options setMaxWriteBatchGroupSizeBytes(final long 
maxWriteBatchGroupSizeBytes) {
+dbOptions.setMaxWriteBatchGroupSizeBytes(maxWriteBatchGroupSizeBytes);
+return this;
+}
+
+@Override
+public long maxWriteBatchGroupSizeBytes() {
+return dbOptions.maxWriteBatchGroupSizeBytes();
+}
+
+@Override
+public Options oldDefaults(final int majorVersion, final int minorVersion) 
{
+columnFamilyOptions.oldDefaults(majorVersion, minorVersion);
+return this;
+}
+
+@Override
+public Options optimizeForSmallDb(final Cache cache) {
+return super.optimizeForSmallDb(cache);
+}
+
+@Override
+public AbstractCompactionFilter<AbstractSlice<?>> compactionFilter() {
+return columnFamilyOptions.compactionFilter();
+}
+
+@Override
+public AbstractCompactionFilterFactory<? extends AbstractCompactionFilter<?>> compactionFilterFactory() {
+return columnFamilyOptions.compactionFilterFactory();
+}
+
+@Override
+public Options setStatsPersistPeriodSec(final int statsPersistPeriodSec) {
+dbOptions.setStatsPersistPeriodSec(statsPersistPeriodSec);
+return this;
+}
+
+@Override
+public int statsPersistPeriodSec() {
+return dbOptions.statsPersistPeriodSec();
+}
+
+@Override
+public Options setStatsHistoryBufferSize(final long 
statsHistoryBufferSize) {
+dbOptions.setStatsHistoryBufferSize(statsHistoryBufferSize);
+return this;
+}
+
+@Override
+public long statsHistoryBufferSize() {
+return dbOptions.statsHistoryBufferSize();
+}
+
+@Override
+public Options setStrictBytesPerSync(final boolean strictBytesPerSync) {
+dbOptions.setStrictBytesPerSync(strictBytesPerSync);
+return this;
+}
+
+@Override
+public boolean strictBytesPerSync() {
+return dbOptions.strictBytesPerSync();
+}
+
+@Override
+public Options setListeners(final List<AbstractEventListener> listeners) {
+dbOptions.setListeners(listeners);
+return this;
+}
+
+@Override
+public List<AbstractEventListener> listeners() {
+return dbOptions.listeners();
+}
+
+@Override
+public Options setEnablePipelinedWrite(final boolean enablePipelinedWrite) 
{
+dbOptions.setEnablePipelinedWrite(enablePipelinedWrite);
+return this;
+}
+
+@Override
+public boolean enablePipelinedWrite() {
+return dbOptions.enablePipelinedWrite();
+}
+
+@Override
+public Options setUnorderedWrite(final boolean unorderedWrite) {
+dbOptions.setUnorderedWrite(unorderedWrite);
+return this;
+}
+
+@Override
+public boolean unorderedWrite() {
+return dbOptions.unorderedWrite();
+}
+
+@Override
+public Options setSkipCheckingSstFileSizesOnDbOpen(final boolean 
skipCheckingSstFileSizesOnDbOpen) {
+
dbOptions.setSkipCheckingSstFileSizesOnDbOpen(skipCheckingSstFileSizesOnDbOpen);
+return this;
+}
+
+@Override
+public boolean skipCheckingSstFileSizesOnDbOpen() {
+return dbOptions.skipCheckingSstFileSizesOnDbOpen();
+}
+
+@Override
+public Options setWalFilter(final AbstractWalFilter walFilter) {
+dbOptions.setWalFilter(walFilter);
+return this;
+}
+
+@Override
+public WalFilter walFilter() {
+return dbOptions.walFilter();
+}
+
+@Override
+public Options setAllowIngestBehind(final boolean allowIngestBehind) {
+dbOptions.setAllowIngestBehind(allowIngestBehind);
+return this;
+}
+
+@Override
+public boolean allowIngestBehind() {
+return dbOptions.allowIngestBehind();
+}
+
+@Override
+public Options setPreserveDeletes(final boolean preserveDeletes) {
+dbOptions.setPreserveDeletes(preserveDeletes);
+return this;
+}
+
+@Override
+public boolean preserveDeletes() {
+return dbOptions.preserveDeletes();
+}
+
+@Override
+public Options setTwoWriteQueues(final boolean twoWriteQueues) {
+dbOptions.setTwoWriteQueues(twoWriteQueues);
+return this;
+}
+
+@Override
+public boolean twoWriteQueues() {
+return dbOptions.twoWriteQueues();
+}
+
+@Override
+public Options setManualWalFlush(final boolean manualWalFlush) {
+dbOptions.setManualWalFlush(manualWalFlush);
+return this;
+}
+
+@Override
+public boolean manualWalFlush() {
+return dbOptions.manualWalFlush();
+}
+
+@Override
+public Options 

[jira] [Assigned] (KAFKA-12747) Flaky Test RocksDBStoreTest.shouldReturnUUIDsWithStringPrefix

2021-05-10 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reassigned KAFKA-12747:
-

Fix Version/s: 3.0.0
 Assignee: Guozhang Wang
   Resolution: Fixed

> Flaky Test RocksDBStoreTest.shouldReturnUUIDsWithStringPrefix
> -
>
> Key: KAFKA-12747
> URL: https://issues.apache.org/jira/browse/KAFKA-12747
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: A. Sophie Blee-Goldman
>Assignee: Guozhang Wang
>Priority: Major
>  Labels: flaky-test, newbie, newbie++, unit-test
> Fix For: 3.0.0
>
>
> Stacktrace
> java.lang.AssertionError: 
> Expected: is <1>
>  but: was <2>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBStoreTest.shouldReturnUUIDsWithStringPrefix(RocksDBStoreTest.java:463)
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-10597/10/tests/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] guozhangwang commented on pull request #10643: KAFKA-12747: Fix flakiness in shouldReturnUUIDsWithStringPrefix

2021-05-10 Thread GitBox


guozhangwang commented on pull request #10643:
URL: https://github.com/apache/kafka/pull/10643#issuecomment-837205952


   Will cherry-pick to old branches as well.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] guozhangwang merged pull request #10643: KAFKA-12747: Fix flakiness in shouldReturnUUIDsWithStringPrefix

2021-05-10 Thread GitBox


guozhangwang merged pull request #10643:
URL: https://github.com/apache/kafka/pull/10643


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] g1geordie commented on pull request #10663: KAFKA-12708 Rewrite org.apache.kafka.test.Microbenchmarks by JMH

2021-05-10 Thread GitBox


g1geordie commented on pull request #10663:
URL: https://github.com/apache/kafka/pull/10663#issuecomment-837173030


   @chia7712 can you help me take a look?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] g1geordie opened a new pull request #10663: KAFKA-12708 Rewrite org.apache.kafka.test.Microbenchmarks by JMH

2021-05-10 Thread GitBox


g1geordie opened a new pull request #10663:
URL: https://github.com/apache/kafka/pull/10663


   Rewrite org.apache.kafka.test.Microbenchmarks by JMH


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] wcarlson5 commented on pull request #10634: KAFKA-12754: Improve endOffsets for TaskMetadata

2021-05-10 Thread GitBox


wcarlson5 commented on pull request #10634:
URL: https://github.com/apache/kafka/pull/10634#issuecomment-837165677


@ableegoldman Thanks for the review! I think I've addressed your comments. There 
was one misunderstanding about how the `TaskMetadata` is updated, but let me 
know if you still have questions about that. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] wcarlson5 commented on a change in pull request #10634: KAFKA-12754: Improve endOffsets for TaskMetadata

2021-05-10 Thread GitBox


wcarlson5 commented on a change in pull request #10634:
URL: https://github.com/apache/kafka/pull/10634#discussion_r629609542



##
File path: 
streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java
##
@@ -900,6 +902,14 @@ private long pollPhase() {
 
 final int numRecords = records.count();
 
+for (final TopicPartition topicPartition: records.partitions()) {
+records
+.records(topicPartition)
+.stream()
+.max(Comparator.comparing(ConsumerRecord::offset))
+.ifPresent(t -> 
taskManager.updateTaskEndMetadata(topicPartition, t.offset()));

Review comment:
   It was intended to be the highest offset that the client knows is in the 
topic. In order to make that the case, I am now looking at the most recent poll.
   
   I will add the javadocs to the TaskMetadata getters.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] wcarlson5 commented on a change in pull request #10634: KAFKA-12754: Improve endOffsets for TaskMetadata

2021-05-10 Thread GitBox


wcarlson5 commented on a change in pull request #10634:
URL: https://github.com/apache/kafka/pull/10634#discussion_r629604506



##
File path: 
streams/src/main/java/org/apache/kafka/streams/processor/internals/TaskManager.java
##
@@ -1162,6 +1162,15 @@ private void updateTaskMetadata(final 
Map all
 }
 }
 
+public void updateTaskEndMetadata(final TopicPartition topicPartition, 
final Long offset) {
+for (final Task task: tasks.activeTasks()) {

Review comment:
   Thanks for pointing that out. I think I can fix a lot of these sorts of 
methods that way.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] wcarlson5 commented on a change in pull request #10634: KAFKA-12754: Improve endOffsets for TaskMetadata

2021-05-10 Thread GitBox


wcarlson5 commented on a change in pull request #10634:
URL: https://github.com/apache/kafka/pull/10634#discussion_r629603915



##
File path: 
streams/src/main/java/org/apache/kafka/streams/processor/internals/StandbyTask.java
##
@@ -302,6 +302,11 @@ public void updateCommittedOffsets(final TopicPartition 
topicPartition, final Lo
 
 }
 
+@Override
+public void updateEndOffsets(final TopicPartition topicPartition, final 
Long offset) {
+

Review comment:
   I agree




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] wcarlson5 commented on a change in pull request #10634: KAFKA-12754: Improve endOffsets for TaskMetadata

2021-05-10 Thread GitBox


wcarlson5 commented on a change in pull request #10634:
URL: https://github.com/apache/kafka/pull/10634#discussion_r629599232



##
File path: 
streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
##
@@ -1193,6 +1196,11 @@ public void updateCommittedOffsets(final TopicPartition 
topicPartition, final Lo
 committedOffsets.put(topicPartition, offset);
 }
 
+@Override
+public void updateEndOffsets(final TopicPartition topicPartition, final 
Long offset) {
+highWatermark.put(topicPartition, offset);

Review comment:
   StreamThread#updateThreadMetadata does not actually update the map; what it 
does is link an object reference from the map in the Task to the TaskMetadata. 
This makes it so that whenever the TaskMetadata is fetched it has the most 
up-to-date maps. For that to matter we need to update the map more frequently.
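   
   A minimal standalone sketch of the reference-sharing idea (plain Java, names 
hypothetical):
   
       // The metadata side holds a reference to the task-side map, not a copy,
       // so later puts are visible through the metadata without re-linking.
       final Map<TopicPartition, Long> taskSideMap = new HashMap<>();
       final Map<TopicPartition, Long> metadataView = taskSideMap;
       taskSideMap.put(new TopicPartition("input", 0), 42L);
       metadataView.get(new TopicPartition("input", 0)); // returns 42L immediately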




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (KAFKA-12708) Rewrite org.apache.kafka.test.Microbenchmarks by JMH

2021-05-10 Thread GeordieMai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GeordieMai reassigned KAFKA-12708:
--

Assignee: GeordieMai

> Rewrite org.apache.kafka.test.Microbenchmarks by JMH
> 
>
> Key: KAFKA-12708
> URL: https://issues.apache.org/jira/browse/KAFKA-12708
> Project: Kafka
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: GeordieMai
>Priority: Minor
>
> The benchmark code is a bit obsolete and it would be better to rewrite it by 
> JMH



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] wcarlson5 commented on a change in pull request #10634: KAFKA-12754: Improve endOffsets for TaskMetadata

2021-05-10 Thread GitBox


wcarlson5 commented on a change in pull request #10634:
URL: https://github.com/apache/kafka/pull/10634#discussion_r629595406



##
File path: 
streams/src/test/java/org/apache/kafka/streams/integration/TaskMetadataIntegrationTest.java
##
@@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kafka.streams.integration;
+
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.common.serialization.StringSerializer;
+import org.apache.kafka.streams.KafkaStreams;
+import org.apache.kafka.streams.KeyValue;
+import org.apache.kafka.streams.StreamsBuilder;
+import org.apache.kafka.streams.StreamsConfig;
+import org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster;
+import org.apache.kafka.streams.integration.utils.IntegrationTestUtils;
+import org.apache.kafka.streams.kstream.KStream;
+import org.apache.kafka.streams.processor.AbstractProcessor;
+import org.apache.kafka.streams.processor.TaskMetadata;
+import org.apache.kafka.test.IntegrationTest;
+import org.apache.kafka.test.TestUtils;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TestName;
+
+import java.io.IOException;
+import java.time.Duration;
+import java.util.Collections;
+import java.util.List;
+import java.util.Properties;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.stream.Collectors;
+
+import static org.apache.kafka.common.utils.Utils.mkEntry;
+import static org.apache.kafka.common.utils.Utils.mkMap;
+import static org.apache.kafka.common.utils.Utils.mkObjectProperties;
+import static 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.purgeLocalStreamsState;
+import static 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.safeUniqueTestName;
+import static org.hamcrest.CoreMatchers.equalTo;
+import static org.hamcrest.MatcherAssert.assertThat;
+
+@Category(IntegrationTest.class)
+public class TaskMetadataIntegrationTest {
+
+public static final EmbeddedKafkaCluster CLUSTER = new 
EmbeddedKafkaCluster(1, new Properties(), 0L, 0L);
+
+@BeforeClass
+public static void startCluster() throws IOException {
+CLUSTER.start();
+}
+
+@AfterClass
+public static void closeCluster() {
+CLUSTER.stop();
+}
+public static final Duration DEFAULT_DURATION = Duration.ofSeconds(30);
+
+@Rule
+public TestName testName = new TestName();
+
+private String inputTopic;
+private static StreamsBuilder builder;
+private static Properties properties;
+private static String appId = "";
+private AtomicBoolean process;
+private AtomicBoolean commit;
+
+@Before
+public void setup() {
+final String testId = safeUniqueTestName(getClass(), testName);
+appId = "appId_" + testId;
+inputTopic = "input" + testId;
+IntegrationTestUtils.cleanStateBeforeTest(CLUSTER, inputTopic);
+
+builder  = new StreamsBuilder();
+
+process = new AtomicBoolean(true);
+commit = new AtomicBoolean(true);
+
+final KStream<String, String> stream = builder.stream(inputTopic);
+stream.process(PauseProcessor::new);
+
+properties  = mkObjectProperties(
+mkMap(
+mkEntry(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, 
CLUSTER.bootstrapServers()),
+mkEntry(StreamsConfig.APPLICATION_ID_CONFIG, appId),
+mkEntry(StreamsConfig.STATE_DIR_CONFIG, 
TestUtils.tempDirectory().getPath()),
+mkEntry(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2),
+mkEntry(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, 
Serdes.StringSerde.class),
+
mkEntry(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, 
Serdes.StringSerde.class),
+mkEntry(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1L)
+)
+);
+}
+
+@Test
+public void shouldReportCorrectCommittedOffsetInformation() {
+  

[GitHub] [kafka] wcarlson5 commented on a change in pull request #10634: KAFKA-12754: Improve endOffsets for TaskMetadata

2021-05-10 Thread GitBox


wcarlson5 commented on a change in pull request #10634:
URL: https://github.com/apache/kafka/pull/10634#discussion_r629592700



##
File path: 
streams/src/test/java/org/apache/kafka/streams/integration/TaskMetadataIntegrationTest.java
##
@@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kafka.streams.integration;
+
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.common.serialization.StringSerializer;
+import org.apache.kafka.streams.KafkaStreams;
+import org.apache.kafka.streams.KeyValue;
+import org.apache.kafka.streams.StreamsBuilder;
+import org.apache.kafka.streams.StreamsConfig;
+import org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster;
+import org.apache.kafka.streams.integration.utils.IntegrationTestUtils;
+import org.apache.kafka.streams.kstream.KStream;
+import org.apache.kafka.streams.processor.AbstractProcessor;
+import org.apache.kafka.streams.processor.TaskMetadata;
+import org.apache.kafka.test.IntegrationTest;
+import org.apache.kafka.test.TestUtils;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TestName;
+
+import java.io.IOException;
+import java.time.Duration;
+import java.util.Collections;
+import java.util.List;
+import java.util.Properties;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.stream.Collectors;
+
+import static org.apache.kafka.common.utils.Utils.mkEntry;
+import static org.apache.kafka.common.utils.Utils.mkMap;
+import static org.apache.kafka.common.utils.Utils.mkObjectProperties;
+import static 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.purgeLocalStreamsState;
+import static 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.safeUniqueTestName;
+import static org.hamcrest.CoreMatchers.equalTo;
+import static org.hamcrest.MatcherAssert.assertThat;
+
+@Category(IntegrationTest.class)
+public class TaskMetadataIntegrationTest {
+
+public static final EmbeddedKafkaCluster CLUSTER = new 
EmbeddedKafkaCluster(1, new Properties(), 0L, 0L);
+
+@BeforeClass
+public static void startCluster() throws IOException {
+CLUSTER.start();
+}
+
+@AfterClass
+public static void closeCluster() {
+CLUSTER.stop();
+}
+public static final Duration DEFAULT_DURATION = Duration.ofSeconds(30);
+
+@Rule
+public TestName testName = new TestName();
+
+private String inputTopic;
+private static StreamsBuilder builder;
+private static Properties properties;
+private static String appId = "";
+private AtomicBoolean process;
+private AtomicBoolean commit;
+
+@Before
+public void setup() {
+final String testId = safeUniqueTestName(getClass(), testName);
+appId = "appId_" + testId;
+inputTopic = "input" + testId;
+IntegrationTestUtils.cleanStateBeforeTest(CLUSTER, inputTopic);
+
+builder  = new StreamsBuilder();
+
+process = new AtomicBoolean(true);

Review comment:
   process makes more sense to me, because the flag controls the processing 
as well




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] mumrah commented on a change in pull request #10431: KAFKA-12543: Change RawSnapshotReader ownership model

2021-05-10 Thread GitBox


mumrah commented on a change in pull request #10431:
URL: https://github.com/apache/kafka/pull/10431#discussion_r629541231



##
File path: core/src/main/scala/kafka/raft/KafkaMetadataLog.scala
##
@@ -161,19 +162,24 @@ final class KafkaMetadataLog private (
 
   override def truncateToLatestSnapshot(): Boolean = {
 val latestEpoch = log.latestEpoch.getOrElse(0)
-latestSnapshotId().asScala match {
-  case Some(snapshotId) if (snapshotId.epoch > latestEpoch ||
-(snapshotId.epoch == latestEpoch && snapshotId.offset > 
endOffset().offset)) =>
+val (truncated, forgottenSnapshots) = latestSnapshotId().asScala match {

Review comment:
   Should we grab the `snapshots` lock for this whole match expression like 
we do in deleteBeforeSnapshot? Is there possibly a race between this block and 
deleteBeforeSnapshot?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] mumrah commented on a change in pull request #10431: KAFKA-12543: Change RawSnapshotReader ownership model

2021-05-10 Thread GitBox


mumrah commented on a change in pull request #10431:
URL: https://github.com/apache/kafka/pull/10431#discussion_r629541231



##
File path: core/src/main/scala/kafka/raft/KafkaMetadataLog.scala
##
@@ -161,19 +162,24 @@ final class KafkaMetadataLog private (
 
   override def truncateToLatestSnapshot(): Boolean = {
 val latestEpoch = log.latestEpoch.getOrElse(0)
-latestSnapshotId().asScala match {
-  case Some(snapshotId) if (snapshotId.epoch > latestEpoch ||
-(snapshotId.epoch == latestEpoch && snapshotId.offset > 
endOffset().offset)) =>
+val (truncated, forgottenSnapshots) = latestSnapshotId().asScala match {

Review comment:
   Should we grab the `snapshots` lock for this whole match expression like 
we do in deleteBeforeSnapshot? Is there possibly a race between this block and 
deleteBeforeSnapshot?

##
File path: core/src/main/scala/kafka/raft/KafkaMetadataLog.scala
##
@@ -242,85 +248,116 @@ final class KafkaMetadataLog private (
   }
 
   override def readSnapshot(snapshotId: OffsetAndEpoch): 
Optional[RawSnapshotReader] = {
-try {
-  if (snapshotIds.contains(snapshotId)) {
-Optional.of(FileRawSnapshotReader.open(log.dir.toPath, snapshotId))
-  } else {
-Optional.empty()
+snapshots synchronized {
+  val reader = snapshots.get(snapshotId) match {
+case None =>
+  // Snapshot doesn't exist
+  None
+case Some(None) =>
+  // Snapshot exists but has never been read before
+  try {
+val snapshotReader = 
Some(FileRawSnapshotReader.open(log.dir.toPath, snapshotId))
+snapshots.put(snapshotId, snapshotReader)
+snapshotReader
+  } catch {
+case _: NoSuchFileException =>
+  // Snapshot doesn't exist in the data dir; remove
+  val path = Snapshots.snapshotPath(log.dir.toPath, snapshotId)
+  warn(s"Couldn't read $snapshotId; expected to find snapshot file 
$path")
+  snapshots.remove(snapshotId)
+  None
+  }
+case Some(value) =>
+  // Snapshot exists and it is already open; do nothing
+  value
   }
-} catch {
-  case _: NoSuchFileException =>
-Optional.empty()
+
+  reader.asJava.asInstanceOf[Optional[RawSnapshotReader]]
 }
   }
 
   override def latestSnapshotId(): Optional[OffsetAndEpoch] = {
-val descending = snapshotIds.descendingIterator
-if (descending.hasNext) {
-  Optional.of(descending.next)
-} else {
-  Optional.empty()
+snapshots synchronized {
+  snapshots.lastOption.map { case (snapshotId, _) => snapshotId }.asJava
 }
   }
 
   override def earliestSnapshotId(): Optional[OffsetAndEpoch] = {
-val ascendingIterator = snapshotIds.iterator
-if (ascendingIterator.hasNext) {
-  Optional.of(ascendingIterator.next)
-} else {
-  Optional.empty()
+snapshots synchronized {
+  snapshots.headOption.map { case (snapshotId, _) => snapshotId }.asJava
 }
   }
 
   override def onSnapshotFrozen(snapshotId: OffsetAndEpoch): Unit = {
-snapshotIds.add(snapshotId)
+snapshots synchronized {
+  snapshots.put(snapshotId, None)
+}
   }
 
   override def deleteBeforeSnapshot(logStartSnapshotId: OffsetAndEpoch): 
Boolean = {
-latestSnapshotId().asScala match {
-  case Some(snapshotId) if (snapshotIds.contains(logStartSnapshotId) &&
-startOffset < logStartSnapshotId.offset &&
-logStartSnapshotId.offset <= snapshotId.offset &&
-log.maybeIncrementLogStartOffset(logStartSnapshotId.offset, 
SnapshotGenerated)) =>
-log.deleteOldSegments()
+val (deleted, forgottenSnapshots) = snapshots synchronized {
+  latestSnapshotId().asScala match {
+case Some(snapshotId) if (snapshots.contains(logStartSnapshotId) &&
+  startOffset < logStartSnapshotId.offset &&
+  logStartSnapshotId.offset <= snapshotId.offset &&
+  log.maybeIncrementLogStartOffset(logStartSnapshotId.offset, 
SnapshotGenerated)) =>
+
+  // Delete all segments that have a "last offset" less than the log 
start offset
+  log.deleteOldSegments()
 
-// Delete snapshot after increasing LogStartOffset
-removeSnapshotFilesBefore(logStartSnapshotId)
+  // Forget snapshots less than the log start offset
+  (true, forgetSnapshotsBefore(logStartSnapshotId))
+case _ =>
+  (false, mutable.TreeMap.empty[OffsetAndEpoch, 
Option[FileRawSnapshotReader]])
+  }
+}
 
-true
+removeSnapshots(forgottenSnapshots)
+deleted
+  }
 
-  case _ => false
-}
+  /**
+   * Forget the snapshots earlier than a given snapshot id and return the 
associated
+   * snapshot readers.
+   *
+   * This method assumes that the lock for `snapshots` is already held.
+   */
+  @nowarn("cat=deprecation") // Needed for TreeMap.until
+  private def 

[GitHub] [kafka] lbradstreet commented on a change in pull request #10620: KAFKA-12736: KafkaProducer.flush holds onto completed ProducerBatch(s) until flush completes

2021-05-10 Thread GitBox


lbradstreet commented on a change in pull request #10620:
URL: https://github.com/apache/kafka/pull/10620#discussion_r629543903



##
File path: 
clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
##
@@ -710,8 +710,11 @@ private boolean appendsInProgress() {
  */
 public void awaitFlushCompletion() throws InterruptedException {
 try {
-for (ProducerBatch batch : this.incomplete.copyAll())
-batch.produceFuture.await();
+// Make a copy of of the request results at the time the flush is 
called.
+// We avoid making a copy of the full incomplete batch collections 
to allow
+// garbage collection.

Review comment:
   I've tried to improve the error message. LMK what you think.
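   
   For context, a minimal sketch of the copying approach the new comment 
describes (types as used in RecordAccumulator; a sketch, not the exact patch):
   
       // Snapshot only the produce futures so the completed ProducerBatch
       // objects (and their record payloads) can be garbage collected
       // while we wait for the flush to complete.
       final List<ProduceRequestResult> results = new ArrayList<>();
       for (final ProducerBatch batch : this.incomplete.copyAll())
           results.add(batch.produceFuture);
       for (final ProduceRequestResult result : results)
           result.await();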




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] cadonna commented on a change in pull request #10568: KAFKA-8897: Upgrade RocksDB to 6.19.3

2021-05-10 Thread GitBox


cadonna commented on a change in pull request #10568:
URL: https://github.com/apache/kafka/pull/10568#discussion_r629541422



##
File path: 
streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapter.java
##
@@ -1381,6 +1364,313 @@ public WriteBufferManager writeBufferManager() {
 return dbOptions.writeBufferManager();
 }
 
+@Override
+public Options setMaxWriteBatchGroupSizeBytes(final long 
maxWriteBatchGroupSizeBytes) {
+dbOptions.setMaxWriteBatchGroupSizeBytes(maxWriteBatchGroupSizeBytes);
+return this;
+}
+
+@Override
+public long maxWriteBatchGroupSizeBytes() {
+return dbOptions.maxWriteBatchGroupSizeBytes();
+}
+
+@Override
+public Options oldDefaults(final int majorVersion, final int minorVersion) 
{
+columnFamilyOptions.oldDefaults(majorVersion, minorVersion);
+return this;
+}
+
+@Override
+public Options optimizeForSmallDb(final Cache cache) {
+return super.optimizeForSmallDb(cache);
+}
+
+@Override
+public AbstractCompactionFilter<AbstractSlice<?>> compactionFilter() {
+return columnFamilyOptions.compactionFilter();
+}
+
+@Override
+public AbstractCompactionFilterFactory<? extends AbstractCompactionFilter<?>> compactionFilterFactory() {
+return columnFamilyOptions.compactionFilterFactory();
+}
+
+@Override
+public Options setStatsPersistPeriodSec(final int statsPersistPeriodSec) {
+dbOptions.setStatsPersistPeriodSec(statsPersistPeriodSec);
+return this;
+}
+
+@Override
+public int statsPersistPeriodSec() {
+return dbOptions.statsPersistPeriodSec();
+}
+
+@Override
+public Options setStatsHistoryBufferSize(final long 
statsHistoryBufferSize) {
+dbOptions.setStatsHistoryBufferSize(statsHistoryBufferSize);
+return this;
+}
+
+@Override
+public long statsHistoryBufferSize() {
+return dbOptions.statsHistoryBufferSize();
+}
+
+@Override
+public Options setStrictBytesPerSync(final boolean strictBytesPerSync) {
+dbOptions.setStrictBytesPerSync(strictBytesPerSync);
+return this;
+}
+
+@Override
+public boolean strictBytesPerSync() {
+return dbOptions.strictBytesPerSync();
+}
+
+@Override
+public Options setListeners(final List<AbstractEventListener> listeners) {
+dbOptions.setListeners(listeners);
+return this;
+}
+
+@Override
+public List<AbstractEventListener> listeners() {
+return dbOptions.listeners();
+}
+
+@Override
+public Options setEnablePipelinedWrite(final boolean enablePipelinedWrite) 
{
+dbOptions.setEnablePipelinedWrite(enablePipelinedWrite);
+return this;
+}
+
+@Override
+public boolean enablePipelinedWrite() {
+return dbOptions.enablePipelinedWrite();
+}
+
+@Override
+public Options setUnorderedWrite(final boolean unorderedWrite) {
+dbOptions.setUnorderedWrite(unorderedWrite);
+return this;
+}
+
+@Override
+public boolean unorderedWrite() {
+return dbOptions.unorderedWrite();
+}
+
+@Override
+public Options setSkipCheckingSstFileSizesOnDbOpen(final boolean 
skipCheckingSstFileSizesOnDbOpen) {
+
dbOptions.setSkipCheckingSstFileSizesOnDbOpen(skipCheckingSstFileSizesOnDbOpen);
+return this;
+}
+
+@Override
+public boolean skipCheckingSstFileSizesOnDbOpen() {
+return dbOptions.skipCheckingSstFileSizesOnDbOpen();
+}
+
+@Override
+public Options setWalFilter(final AbstractWalFilter walFilter) {
+dbOptions.setWalFilter(walFilter);
+return this;
+}
+
+@Override
+public WalFilter walFilter() {
+return dbOptions.walFilter();
+}
+
+@Override
+public Options setAllowIngestBehind(final boolean allowIngestBehind) {
+dbOptions.setAllowIngestBehind(allowIngestBehind);
+return this;
+}
+
+@Override
+public boolean allowIngestBehind() {
+return dbOptions.allowIngestBehind();
+}
+
+@Override
+public Options setPreserveDeletes(final boolean preserveDeletes) {
+dbOptions.setPreserveDeletes(preserveDeletes);
+return this;
+}
+
+@Override
+public boolean preserveDeletes() {
+return dbOptions.preserveDeletes();
+}
+
+@Override
+public Options setTwoWriteQueues(final boolean twoWriteQueues) {
+dbOptions.setTwoWriteQueues(twoWriteQueues);
+return this;
+}
+
+@Override
+public boolean twoWriteQueues() {
+return dbOptions.twoWriteQueues();
+}
+
+@Override
+public Options setManualWalFlush(final boolean manualWalFlush) {
+dbOptions.setManualWalFlush(manualWalFlush);
+return this;
+}
+
+@Override
+public boolean manualWalFlush() {
+return dbOptions.manualWalFlush();
+}
+
+@Override
+public Options setCfPaths(final 

[GitHub] [kafka] guozhangwang commented on a change in pull request #10568: KAFKA-8897: Upgrade RocksDB to 6.19.3

2021-05-10 Thread GitBox


guozhangwang commented on a change in pull request #10568:
URL: https://github.com/apache/kafka/pull/10568#discussion_r629538083



##
File path: 
streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapter.java
##
@@ -1381,6 +1364,313 @@ public WriteBufferManager writeBufferManager() {
 return dbOptions.writeBufferManager();
 }
 
+@Override
+public Options setMaxWriteBatchGroupSizeBytes(final long 
maxWriteBatchGroupSizeBytes) {
+dbOptions.setMaxWriteBatchGroupSizeBytes(maxWriteBatchGroupSizeBytes);
+return this;
+}
+
+@Override
+public long maxWriteBatchGroupSizeBytes() {
+return dbOptions.maxWriteBatchGroupSizeBytes();
+}
+
+@Override
+public Options oldDefaults(final int majorVersion, final int minorVersion) 
{
+columnFamilyOptions.oldDefaults(majorVersion, minorVersion);
+return this;
+}
+
+@Override
+public Options optimizeForSmallDb(final Cache cache) {
+return super.optimizeForSmallDb(cache);
+}
+
+@Override
+public AbstractCompactionFilter<AbstractSlice<?>> compactionFilter() {
+return columnFamilyOptions.compactionFilter();
+}
+
+@Override
+public AbstractCompactionFilterFactory<? extends AbstractCompactionFilter<?>> compactionFilterFactory() {
+return columnFamilyOptions.compactionFilterFactory();
+}
+
+@Override
+public Options setStatsPersistPeriodSec(final int statsPersistPeriodSec) {
+dbOptions.setStatsPersistPeriodSec(statsPersistPeriodSec);
+return this;
+}
+
+@Override
+public int statsPersistPeriodSec() {
+return dbOptions.statsPersistPeriodSec();
+}
+
+@Override
+public Options setStatsHistoryBufferSize(final long 
statsHistoryBufferSize) {
+dbOptions.setStatsHistoryBufferSize(statsHistoryBufferSize);
+return this;
+}
+
+@Override
+public long statsHistoryBufferSize() {
+return dbOptions.statsHistoryBufferSize();
+}
+
+@Override
+public Options setStrictBytesPerSync(final boolean strictBytesPerSync) {
+dbOptions.setStrictBytesPerSync(strictBytesPerSync);
+return this;
+}
+
+@Override
+public boolean strictBytesPerSync() {
+return dbOptions.strictBytesPerSync();
+}
+
+@Override
+public Options setListeners(final List<AbstractEventListener> listeners) {
+dbOptions.setListeners(listeners);
+return this;
+}
+
+@Override
+public List<AbstractEventListener> listeners() {
+return dbOptions.listeners();
+}
+
+@Override
+public Options setEnablePipelinedWrite(final boolean enablePipelinedWrite) 
{
+dbOptions.setEnablePipelinedWrite(enablePipelinedWrite);
+return this;
+}
+
+@Override
+public boolean enablePipelinedWrite() {
+return dbOptions.enablePipelinedWrite();
+}
+
+@Override
+public Options setUnorderedWrite(final boolean unorderedWrite) {
+dbOptions.setUnorderedWrite(unorderedWrite);
+return this;
+}
+
+@Override
+public boolean unorderedWrite() {
+return dbOptions.unorderedWrite();
+}
+
+@Override
+public Options setSkipCheckingSstFileSizesOnDbOpen(final boolean 
skipCheckingSstFileSizesOnDbOpen) {
+
dbOptions.setSkipCheckingSstFileSizesOnDbOpen(skipCheckingSstFileSizesOnDbOpen);
+return this;
+}
+
+@Override
+public boolean skipCheckingSstFileSizesOnDbOpen() {
+return dbOptions.skipCheckingSstFileSizesOnDbOpen();
+}
+
+@Override
+public Options setWalFilter(final AbstractWalFilter walFilter) {
+dbOptions.setWalFilter(walFilter);
+return this;
+}
+
+@Override
+public WalFilter walFilter() {
+return dbOptions.walFilter();
+}
+
+@Override
+public Options setAllowIngestBehind(final boolean allowIngestBehind) {
+dbOptions.setAllowIngestBehind(allowIngestBehind);
+return this;
+}
+
+@Override
+public boolean allowIngestBehind() {
+return dbOptions.allowIngestBehind();
+}
+
+@Override
+public Options setPreserveDeletes(final boolean preserveDeletes) {
+dbOptions.setPreserveDeletes(preserveDeletes);
+return this;
+}
+
+@Override
+public boolean preserveDeletes() {
+return dbOptions.preserveDeletes();
+}
+
+@Override
+public Options setTwoWriteQueues(final boolean twoWriteQueues) {
+dbOptions.setTwoWriteQueues(twoWriteQueues);
+return this;
+}
+
+@Override
+public boolean twoWriteQueues() {
+return dbOptions.twoWriteQueues();
+}
+
+@Override
+public Options setManualWalFlush(final boolean manualWalFlush) {
+dbOptions.setManualWalFlush(manualWalFlush);
+return this;
+}
+
+@Override
+public boolean manualWalFlush() {
+return dbOptions.manualWalFlush();
+}
+
+@Override
+public Options 

[jira] [Commented] (KAFKA-12757) Move server related common and public classes into separate module(s).

2021-05-10 Thread Jun Rao (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342023#comment-17342023
 ] 

Jun Rao commented on KAFKA-12757:
-

[~satish.duggana]: The implementation of server-side APIs may depend on common 
classes. Will the common classes ever depend on the APIs?

> Move server related common and public classes into separate module(s).
> --
>
> Key: KAFKA-12757
> URL: https://issues.apache.org/jira/browse/KAFKA-12757
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Satish Duggana
>Priority: Major
>
> There are two sets of classes that we want to pull out here for server 
> common/api classes.
> 1. All the common classes used by server modules.
>  2. All the public server-side classes exposed to users. Some of these 
> classes are storage-api, security, quotas, etc.
> Couple of approaches that we can consider here:
> 1. Both sets of classes will be added to the server-common module, but common 
> classes will have a package prefix of org.apache.kafka.server.common, while 
> public classes will continue to have their existing package names. We will 
> generate javadocs for these public classes.
>  Pros
>  - Users and server modules will be dependent on a single module for both 
> common and public APIs.
>  - Both the common classes and the API classes can depend on each other.
> Cons
>  - Not a clean separation between common classes and public classes.
> 2. Common classes used by server modules will be added to the server-common 
> module. We will create another module called server-api for server-side 
> public classes.
> Pros
>  - It gives a neat separation between common and public classes.
>  - Maintaining the growth of these modules will be easy.
> Cons
>  - We cannot have the common and API classes depend on each other, as that 
> would cause a circular dependency.
>  
> Please feel free to modify/add other approaches for modularizing these 
> classes. 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] mumrah commented on a change in pull request #10431: KAFKA-12543: Change RawSnapshotReader ownership model

2021-05-10 Thread GitBox


mumrah commented on a change in pull request #10431:
URL: https://github.com/apache/kafka/pull/10431#discussion_r629531449



##
File path: 
raft/src/main/java/org/apache/kafka/snapshot/FileRawSnapshotReader.java
##
@@ -54,8 +54,12 @@ public Records records() {
 }
 
 @Override
-public void close() throws IOException {
-fileRecords.close();
+public void close() {

Review comment:
   Ok, sounds good. If we're strictly dealing with IOExceptions, maybe we 
can use UncheckedIOException?
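   
   For illustration, roughly (a sketch, assuming the only checked failure is 
the IOException from fileRecords.close()):
   
       @Override
       public void close() {
           try {
               fileRecords.close();
           } catch (final IOException e) {
               throw new UncheckedIOException(e);
           }
       }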




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] guozhangwang commented on a change in pull request #10643: KAFKA-12747: Fix flakiness in shouldReturnUUIDsWithStringPrefix

2021-05-10 Thread GitBox


guozhangwang commented on a change in pull request #10643:
URL: https://github.com/apache/kafka/pull/10643#discussion_r629530576



##
File path: 
streams/src/test/java/org/apache/kafka/streams/state/internals/RocksDBStoreTest.java
##
@@ -451,17 +453,14 @@ public void shouldReturnUUIDsWithStringPrefix() {
 rocksDBStore.flush();
 
 final KeyValueIterator<Bytes, byte[]> keysWithPrefix = rocksDBStore.prefixScan(prefix, stringSerializer);
-final List<String> valuesWithPrefix = new ArrayList<>();
 int numberOfKeysReturned = 0;
 
 while (keysWithPrefix.hasNext()) {
-final KeyValue<Bytes, byte[]> next = keysWithPrefix.next();
-valuesWithPrefix.add(new String(next.value));
+keysWithPrefix.next();
 numberOfKeysReturned++;
 }
 
-assertThat(numberOfKeysReturned, is(1));
-assertThat(valuesWithPrefix.get(0), is("a"));

Review comment:
   SG, I will do that upon merging.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] jlprat commented on pull request #10662: KAFKA-12747: Fix flaky test RocksDBStoreTest.shouldReturnUUIDsWithStr…

2021-05-10 Thread GitBox


jlprat commented on pull request #10662:
URL: https://github.com/apache/kafka/pull/10662#issuecomment-836981716


   The other way around: I hadn't seen your PR, you were first. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] guozhangwang commented on a change in pull request #10552: KAFKA-12675: improve the sticky general assignor scalability and performance

2021-05-10 Thread GitBox


guozhangwang commented on a change in pull request #10552:
URL: https://github.com/apache/kafka/pull/10552#discussion_r629529360



##
File path: 
clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractStickyAssignor.java
##
@@ -387,58 +403,125 @@ private boolean allSubscriptionsEqual(Set<String> allTopics,
 TreeSet<String> sortedCurrentSubscriptions = new TreeSet<>(new SubscriptionComparator(currentAssignment));
 sortedCurrentSubscriptions.addAll(currentAssignment.keySet());
 
+int totalPartitionCount = 
partitionsPerTopic.values().stream().reduce(0, Integer::sum);
+
 balance(currentAssignment, prevAssignment, sortedPartitions, 
unassignedPartitions, sortedCurrentSubscriptions,
-consumer2AllPotentialPartitions, partition2AllPotentialConsumers, 
currentPartitionConsumer, revocationRequired);
+consumer2AllPotentialTopics, topic2AllPotentialConsumers, 
currentPartitionConsumer, revocationRequired,
+partitionsPerTopic, totalPartitionCount);
+
+if (log.isDebugEnabled()) {
+log.debug("final assignment: " + currentAssignment);
+}
+
 return currentAssignment;
 }
 
+/**
+ * Get the unassigned partition list by computing the difference set of 
sortedPartitions (all partitions)
+ * and sortedToBeRemovedPartitions. We use the two-pointer technique here:
+ *
+ * We loop over sortedPartitions and compare each element to the ith element of 
the sorted toBeRemovedPartitions (i starts from 0):
+ *   - if not equal to the ith element, add it to unassignedPartitions
+ *   - if equal to the ith element, get the next element from 
sortedToBeRemovedPartitions
+ *
+ * @param sortedPartitions: all partitions, sorted
+ * @param sortedToBeRemovedPartitions: sorted partitions, all included 
in sortedPartitions
+ * @return the partitions not assigned to any current consumer
+ */
+private List<TopicPartition> getUnassignedPartitions(List<TopicPartition> sortedPartitions,
+ List<TopicPartition> sortedToBeRemovedPartitions) {
+List<TopicPartition> unassignedPartitions = new ArrayList<>();
+
+int index = 0;
+boolean shouldAddDirectly = false;
+int sizeToBeRemovedPartitions = sortedToBeRemovedPartitions.size();
+TopicPartition nextPartition = sortedToBeRemovedPartitions.get(index);
+for (TopicPartition topicPartition : sortedPartitions) {
+if (shouldAddDirectly || !nextPartition.equals(topicPartition)) {
+unassignedPartitions.add(topicPartition);
+} else {
+// equal case, don't add to unassignedPartitions, just get 
next partition
+if (index < sizeToBeRemovedPartitions - 1) {
+nextPartition = sortedToBeRemovedPartitions.get(++index);
+} else {
+// add the remaining directly since there are no more 
toBeRemovedPartitions
+shouldAddDirectly = true;
+}
+}
+}
+return unassignedPartitions;
+}
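// Illustrative walk-through (hypothetical input, not part of this change):
// with sortedPartitions = [t-0, t-1, t-2, t-3] and
// sortedToBeRemovedPartitions = [t-1, t-3], the two pointers advance through
// both lists exactly once and the method returns [t-0, t-2].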
+
+/**
+ * Update the prevAssignment with the partitions, consumer and generation 
given in the parameters.
+ *
+ * @param partitions: The partitions with which to update prevAssignment
+ * @param consumer: The consumer id
+ * @param prevAssignment: The assignment map holding the assignments with 
the 2nd-largest generation
+ * @param generation: The generation of this assignment (partitions)
+ */
+private void updatePrevAssignment(Map<TopicPartition, ConsumerGenerationPair> prevAssignment,
+  List<TopicPartition> partitions,
+  String consumer,
+  int generation) {
+for (TopicPartition partition: partitions) {
+ConsumerGenerationPair consumerGeneration = 
prevAssignment.get(partition);
+if (consumerGeneration != null) {
+// only keep the latest previous assignment
+if (generation > consumerGeneration.generation)
+prevAssignment.put(partition, new 
ConsumerGenerationPair(consumer, generation));
+} else {
+prevAssignment.put(partition, new 
ConsumerGenerationPair(consumer, generation));
+}
+}
+}
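// Illustrative walk-through (hypothetical input, not part of this change):
// with prevAssignment = {tp0 -> (c1, generation 3)}, the call
// updatePrevAssignment(prevAssignment, [tp0], "c2", 5) replaces the entry
// with (c2, 5), while a call with generation 2 would leave (c1, 3) untouched.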
+
+/**
+ * Fill in the currentAssignment and prevAssignment from the 
subscriptions.
+ *
+ * @param subscriptions: Map from each member id to its topic 
subscription
+ * @param currentAssignment: The assignment map holding the assignments with 
the largest generation
+ * @param prevAssignment: The assignment map holding the assignments with the 
2nd-largest generation
+ */
 private void prepopulateCurrentAssignments(Map<String, Subscription> 
subscriptions,
Map<String, List<TopicPartition>> currentAssignment,
Map<TopicPartition, ConsumerGenerationPair> prevAssignment) {
 // we need to process subscriptions' user 

[GitHub] [kafka] mumrah commented on a change in pull request #10431: KAFKA-12543: Change RawSnapshotReader ownership model

2021-05-10 Thread GitBox


mumrah commented on a change in pull request #10431:
URL: https://github.com/apache/kafka/pull/10431#discussion_r629529239



##
File path: core/src/main/scala/kafka/raft/KafkaMetadataLog.scala
##
@@ -242,85 +246,116 @@ final class KafkaMetadataLog private (
   }
 
   override def readSnapshot(snapshotId: OffsetAndEpoch): 
Optional[RawSnapshotReader] = {
-try {
-  if (snapshotIds.contains(snapshotId)) {
-Optional.of(FileRawSnapshotReader.open(log.dir.toPath, snapshotId))
-  } else {
-Optional.empty()
+snapshots synchronized {
+  val reader = snapshots.get(snapshotId) match {
+case None =>
+  // Snapshot doesn't exist
+  None
+case Some(None) =>
+  // Snapshot exists but has never been read before
+  try {
+val snapshotReader = 
Some(FileRawSnapshotReader.open(log.dir.toPath, snapshotId))
+snapshots.put(snapshotId, snapshotReader)
+snapshotReader
+  } catch {
+case _: NoSuchFileException =>
+  // Snapshot doesn't exist in the data dir; remove it
+  val path = Snapshots.snapshotPath(log.dir.toPath, snapshotId)
+  warn(s"Couldn't read $snapshotId; expected to find snapshot file 
$path")
+  snapshots.remove(snapshotId)
+  None
+  }
+case Some(value) =>
+  // Snapshot exists and it is already open; do nothing
+  value
   }
-} catch {
-  case _: NoSuchFileException =>
-Optional.empty()
+
+  reader.asJava.asInstanceOf[Optional[RawSnapshotReader]]
 }
   }
 
   override def latestSnapshotId(): Optional[OffsetAndEpoch] = {
-val descending = snapshotIds.descendingIterator
-if (descending.hasNext) {
-  Optional.of(descending.next)
-} else {
-  Optional.empty()
+snapshots synchronized {
+  snapshots.lastOption.map { case (snapshotId, _) => snapshotId }.asJava
 }
   }
 
   override def earliestSnapshotId(): Optional[OffsetAndEpoch] = {
-val ascendingIterator = snapshotIds.iterator
-if (ascendingIterator.hasNext) {
-  Optional.of(ascendingIterator.next)
-} else {
-  Optional.empty()
+snapshots synchronized {
+  snapshots.headOption.map { case (snapshotId, _) => snapshotId }.asJava
 }
   }
 
   override def onSnapshotFrozen(snapshotId: OffsetAndEpoch): Unit = {
-snapshotIds.add(snapshotId)
+snapshots synchronized {
+  snapshots.put(snapshotId, None)
+}
   }
 
   override def deleteBeforeSnapshot(logStartSnapshotId: OffsetAndEpoch): 
Boolean = {
-latestSnapshotId().asScala match {
-  case Some(snapshotId) if (snapshotIds.contains(logStartSnapshotId) &&
-startOffset < logStartSnapshotId.offset &&
-logStartSnapshotId.offset <= snapshotId.offset &&
-log.maybeIncrementLogStartOffset(logStartSnapshotId.offset, 
SnapshotGenerated)) =>
-log.deleteOldSegments()
+val (deleted, forgottenSnapshots) = snapshots synchronized {
+  latestSnapshotId().asScala match {
+case Some(snapshotId) if (snapshots.contains(logStartSnapshotId) &&
+  startOffset < logStartSnapshotId.offset &&
+  logStartSnapshotId.offset <= snapshotId.offset &&
+  log.maybeIncrementLogStartOffset(logStartSnapshotId.offset, 
SnapshotGenerated)) =>
+
+  // Delete all segments that have a "last offset" less than the log 
start offset
+  log.deleteOldSegments()
 
-// Delete snapshot after increasing LogStartOffset
-removeSnapshotFilesBefore(logStartSnapshotId)
+  // Forget snapshots less than the log start offset
+  (true, forgetSnapshotsBefore(logStartSnapshotId))
+case _ =>
+  (false, mutable.TreeMap.empty[OffsetAndEpoch, 
Option[FileRawSnapshotReader]])
+  }
+}
 
-true
+removeSnapshots(forgottenSnapshots)
+deleted
+  }
 
-  case _ => false
-}
+  /**
+   * Forget the snapshots earlier than a given snapshot id and return the 
associated
+   * snapshot readers.
+   *
+   * This method assumes that the lock for `snapshots` is already held.
+   */
+  @nowarn("cat=deprecation") // Needed for TreeMap.until

Review comment:
    thanks for the explanation 
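
   For readers following the thread: the ownership change above boils down to a 
lazily populated cache of snapshot readers guarded by a lock. A minimal Java 
sketch of that pattern, with hypothetical names (the real implementation is the 
Scala code quoted above):
   ```java
   import java.util.Optional;
   import java.util.TreeMap;
   import java.util.function.Function;

   // A null value means "snapshot id is known but no reader has been opened yet".
   final class SnapshotCache<K extends Comparable<K>, R> {
       private final TreeMap<K, R> snapshots = new TreeMap<>();

       synchronized void onSnapshotFrozen(K id) {
           snapshots.put(id, null); // register the id; open lazily on first read
       }

       // `open` is assumed to return a non-null reader or throw if the file is gone.
       synchronized Optional<R> read(K id, Function<K, R> open) {
           if (!snapshots.containsKey(id))
               return Optional.empty();       // snapshot unknown
           R reader = snapshots.get(id);
           if (reader == null) {
               try {
                   reader = open.apply(id);   // first read: open and cache
                   snapshots.put(id, reader);
               } catch (RuntimeException missingFile) {
                   snapshots.remove(id);      // file vanished; forget the id
                   return Optional.empty();
               }
           }
           return Optional.of(reader);
       }
   }
   ```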




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] guozhangwang commented on pull request #10662: KAFKA-12747: Fix flaky test RocksDBStoreTest.shouldReturnUUIDsWithStr…

2021-05-10 Thread GitBox


guozhangwang commented on pull request #10662:
URL: https://github.com/apache/kafka/pull/10662#issuecomment-836978921


   Hey @jlprat, sorry, I was not aware you're also working on this ticket.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (KAFKA-12767) Properly set Streams system test runtime classpath

2021-05-10 Thread John Roesler (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17342014#comment-17342014
 ] 

John Roesler commented on KAFKA-12767:
--

Marked "affects version" to 3.0.0, since this problem was first introduced in 
trunk during the 3.0 development cycle. All it means is that we don't want or 
need to try cherry-picking the fixes to older branches.

> Properly set Streams system test runtime classpath
> --
>
> Key: KAFKA-12767
> URL: https://issues.apache.org/jira/browse/KAFKA-12767
> Project: Kafka
>  Issue Type: Task
>  Components: streams, system tests
>Affects Versions: 3.0.0
>Reporter: John Roesler
>Priority: Minor
>  Labels: newbie++
>
> Some of the streams system tests started to fail recently when we stopped 
> exporting our transitive dependencies in the test jar.
> [~lct45] was kind enough to submit 
> [https://github.com/apache/kafka/pull/10631] to get the system tests running 
> again, but that PR is only a stop-gap.
> The real solution is to properly package the transitive dependencies and make 
> them available to the system test runtime.
> Here is the reason: PR#10631 gets past the issue by removing runtime usages 
> on Hamcrest, but Hamcrest is still present in the compiletime classpath. 
> Tomorrow, a new PR could add a reference to Hamcrest, and all the unit and 
> integration tests would pass, but we would again see the mysterious system 
> test failures. Only after another round of costly investigation would we 
> realize the root cause, and we might again decide to just patch the test to 
> remove the reference.
> It would be far better in the long run to fix the underlying condition: the 
> difference between the compiletime and runtime classpaths for the system 
> tests.
>  
> To help get you started, I'll share some of the groundwork for this task, 
> which I put together while trying to understand the nature of the problem.
> The first step is to actually copy the transitive dependencies. We had to 
> stop placing these dependencies in the `dependant-libs` build directory 
> because that results in us actually shipping those dependencies with our 
> releases. Copying a similar mechanism from the `:core` project, we can add a 
> new build directory (arbitrarily: `dependant-testlibs`), and again copy the 
> runtime dependencies there. Here is a commit in my fork that does just that: 
> [https://github.com/vvcephei/kafka/commit/8d4552dee05f2a963b8072b86aae756415ea2482]
> The next step is to place those jars on the classpath of the system test 
> code. The mechanism for that is `kafka-run-class.sh`: 
> [https://github.com/apache/kafka/blob/trunk/bin/kafka-run-class.sh]
> A specific example of this is the special logic for upgrade tests:
>  # If we are running upgrade tests, then we set the artifact directories to 
> the relevant version. Otherwise, we use the current build artifacts. 
> [https://github.com/apache/kafka/blob/fc405d792de12a50956195827eaf57bbf6c9/bin/kafka-run-class.sh#L77-L85]
> # Then, here's where we specifically pull in Hamcrest from those build 
> artifact directories: 
> [https://github.com/apache/kafka/blob/fc405d792de12a50956195827eaf57bbf6c9/bin/kafka-run-class.sh#L128-L136]
> It seems to me that since Hamcrest is actually a more general dependency of 
> the tests, we might as well pack it up in `dependant-testlibs` and then pull 
> it into the classpath from there any time we're running tests. It looks like 
> we ought to set `streams_dependant_clients_lib_dir` to `dependant-testlibs` 
> any time `INCLUDE_TEST_JARS` is `true`. But if we do have 
> `UPGRADE_KAFKA_STREAMS_TEST_VERSION` set, then it should override the lib 
> dir, since those artifacts already include the transitive dependencies for 
> those older versions.
>  
> Although the proposed fix itself is pretty small, I think the testing will 
> take a few days. You might want to just put some `echo` statements in 
> kafka-run-class.sh to see what jars are included on the classpath while you 
> run different tests, both locally using Docker, and remotely using Jenkins.
> I marked this ticket as `newbie++` because you don't need deep knowledge of 
> the codebase to tackle this ticket, just a high pain tolerance for digging 
> through gradle/docker/bash script debugging to make sure the right bits make 
> it into the system tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12767) Properly set Streams system test runtime classpath

2021-05-10 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler updated KAFKA-12767:
-
Affects Version/s: 3.0.0

> Properly set Streams system test runtime classpath
> --
>
> Key: KAFKA-12767
> URL: https://issues.apache.org/jira/browse/KAFKA-12767
> Project: Kafka
>  Issue Type: Task
>  Components: streams, system tests
>Affects Versions: 3.0.0
>Reporter: John Roesler
>Priority: Minor
>  Labels: newbie++
>
> Some of the streams system tests started to fail recently when we stopped 
> exporting our transitive dependencies in the test jar.
> [~lct45] was kind enough to submit 
> [https://github.com/apache/kafka/pull/10631] to get the system tests running 
> again, but that PR is only a stop-gap.
> The real solution is to properly package the transitive dependencies and make 
> them available to the system test runtime.
> Here is the reason: PR#10631 gets past the issue by removing runtime usages 
> on Hamcrest, but Hamcrest is still present in the compiletime classpath. 
> Tomorrow, a new PR could add a reference to Hamcrest, and all the unit and 
> integration tests would pass, but we would again see the mysterious system 
> test failures. Only after another round of costly investigation would we 
> realize the root cause, and we might again decide to just patch the test to 
> remove the reference.
> It would be far better in the long run to fix the underlying condition: the 
> difference between the compiletime and runtime classpaths for the system 
> tests.
>  
> To help get you started, I'll share some of the groundwork for this task, 
> which I put together while trying to understand the nature of the problem.
> The first step is to actually copy the transitive dependencies. We had to 
> stop placing these dependencies in the `dependant-libs` build directory 
> because that results in us actually shipping those dependencies with our 
> releases. Copying a similar mechanism from the `:core` project, we can add a 
> new build directory (arbitrarily: `dependant-testlibs`), and again copy the 
> runtime dependencies there. Here is a commit in my fork that does just that: 
> [https://github.com/vvcephei/kafka/commit/8d4552dee05f2a963b8072b86aae756415ea2482]
> The next step is to place those jars on the classpath of the system test 
> code. The mechanism for that is `kafka-run-class.sh`: 
> [https://github.com/apache/kafka/blob/trunk/bin/kafka-run-class.sh]
> A specific example of this is the special logic for upgrade tests:
>  # If we are running upgrade tests, then we set the artifact directories to 
> the relevant version. Otherwise, we use the current build artifacts. 
> [https://github.com/apache/kafka/blob/fc405d792de12a50956195827eaf57bbf6c9/bin/kafka-run-class.sh#L77-L85]
> # Then, here's where we specifically pull in Hamcrest from those build 
> artifact directories: 
> [https://github.com/apache/kafka/blob/fc405d792de12a50956195827eaf57bbf6c9/bin/kafka-run-class.sh#L128-L136]
> It seems to me that since Hamcrest is actually a more general dependency of 
> the tests, we might as well pack it up in `dependant-testlibs` and then pull 
> it into the classpath from there any time we're running tests. It looks like 
> we ought to set `streams_dependant_clients_lib_dir` to `dependant-testlibs` 
> any time `INCLUDE_TEST_JARS` is `true`. But if we do have 
> `UPGRADE_KAFKA_STREAMS_TEST_VERSION` set, then it should override the lib 
> dir, since those artifacts already include the transitive dependencies for 
> those older versions.
>  
> Although the proposed fix itself is pretty small, I think the testing will 
> take a few days. You might want to just put some `echo` statements in 
> kafka-run-class.sh to see what jars are included on the classpath while you 
> run different tests, both locally using Docker, and remotely using Jenkins.
> I marked this ticket as `newbie++` because you don't need deep knowledge of 
> the codebase to tackle this ticket, just a high pain tolerance for digging 
> through gradle/docker/bash script debugging to make sure the right bits make 
> it into the system tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] vvcephei commented on pull request #10631: MINOR: Stop using hamcrest in system tests

2021-05-10 Thread GitBox


vvcephei commented on pull request #10631:
URL: https://github.com/apache/kafka/pull/10631#issuecomment-836964248


   Thanks for this fix, @lct45 and @cadonna ! It's great to get these tests 
running again.
   
   FYI, I went ahead and filed 
https://issues.apache.org/jira/browse/KAFKA-12767 for a long-term fix.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (KAFKA-12767) Properly set Streams system test runtime classpath

2021-05-10 Thread John Roesler (Jira)
John Roesler created KAFKA-12767:


 Summary: Properly set Streams system test runtime classpath
 Key: KAFKA-12767
 URL: https://issues.apache.org/jira/browse/KAFKA-12767
 Project: Kafka
  Issue Type: Task
  Components: streams, system tests
Reporter: John Roesler


Some of the streams system tests started to fail recently when we stopped 
exporting our transitive dependencies in the test jar.

[~lct45] was kind enough to submit [https://github.com/apache/kafka/pull/10631] 
to get the system tests running again, but that PR is only a stop-gap.

The real solution is to properly package the transitive dependencies and make 
them available to the system test runtime.

Here is the reason: PR#10631 gets past the issue by removing runtime usages on 
Hamcrest, but Hamcrest is still present in the compiletime classpath. Tomorrow, 
a new PR could add a reference to Hamcrest, and all the unit and integration 
tests would pass, but we would again see the mysterious system test failures. 
Only after another round of costly investigation would we realize the root 
cause, and we might again decide to just patch the test to remove the reference.

It would be far better in the long run to fix the underlying condition: the 
difference between the compiletime and runtime classpaths for the system tests.

 

To help get you started, I'll share some of the groundwork for this task, which 
I put together while trying to understand the nature of the problem.

The first step is to actually copy the transitive dependencies. We had to stop 
placing these dependencies in the `dependant-libs` build directory because that 
results in us actually shipping those dependencies with our releases. Copying a 
similar mechanism from the `:core` project, we can add a new build directory 
(arbitrarily: `dependant-testlibs`), and again copy the runtime dependencies 
there. Here is a commit in my fork that does just that: 
[https://github.com/vvcephei/kafka/commit/8d4552dee05f2a963b8072b86aae756415ea2482]

The next step is to place those jars on the classpath of the system test code. 
The mechanism for that is `kafka-run-class.sh`: 
[https://github.com/apache/kafka/blob/trunk/bin/kafka-run-class.sh]

A specific example of this is the special logic for upgrade tests:
 # If we are running upgrade tests, then we set the artifact directories to the 
relevant version. Otherwise, we use the current build artifacts. 
[https://github.com/apache/kafka/blob/fc405d792de12a50956195827eaf57bbf6c9/bin/kafka-run-class.sh#L77-L85]
 # Then, here's where we specifically pull in Hamcrest from those build artifact 
directories: 
[https://github.com/apache/kafka/blob/fc405d792de12a50956195827eaf57bbf6c9/bin/kafka-run-class.sh#L128-L136]

It seems to me that since Hamcrest is actually a more general dependency of the 
tests, we might as well pack it up in `dependant-testlibs` and then pull it 
into the classpath from there any time we're running tests. It looks like we 
ought to set `streams_dependant_clients_lib_dir` to `dependant-testlibs` any 
time `INCLUDE_TEST_JARS` is `true`. But if we do have 
`UPGRADE_KAFKA_STREAMS_TEST_VERSION` set, then it should override the lib dir, 
since those artifacts already include the transitive dependencies for those 
older versions.

 

Although the proposed fix itself is pretty small, I think the testing will take 
a few days. You might want to just put some `echo` statements in 
kafka-run-class.sh to see what jars are included on the classpath while you run 
different tests, both locally using Docker, and remotely using Jenkins.

I marked this ticket as `newbie++` because you don't need deep knowledge of the 
codebase to tackle this ticket, just a high pain tolerance for digging through 
gradle/docker/bash script debugging to make sure the right bits make it into 
the system tests.
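
One quick way to do that classpath check from inside the test JVM itself is a 
throwaway probe like the sketch below (`ClasspathProbe` is a hypothetical name, 
not a class in the codebase):

```java
import java.io.File;

// Prints every runtime classpath entry, flagging those matching a filter
// (e.g. "hamcrest"), so a system-test run can confirm which jars made it in.
public class ClasspathProbe {
    public static void main(String[] args) {
        String filter = args.length > 0 ? args[0] : "hamcrest";
        for (String entry : System.getProperty("java.class.path").split(File.pathSeparator)) {
            System.out.println((entry.contains(filter) ? "MATCH " : "      ") + entry);
        }
    }
}
```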



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] C0urante commented on pull request #10659: MINOR: remove unnecessary placeholder from WorkerSourceTask#recordSent

2021-05-10 Thread GitBox


C0urante commented on pull request #10659:
URL: https://github.com/apache/kafka/pull/10659#issuecomment-836857466


   Blegh, sorry guys. Thanks for catching this!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (KAFKA-12757) Move server related common and public classes into separate module(s).

2021-05-10 Thread Satish Duggana (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana updated KAFKA-12757:
---
Description: 
There are two sets of classes that we want to pull out here for server 
common/api classes.

1. All the common classes used by server modules.
 2. All the public server side classes exposed to users. Some of these classes 
are storage-api, security, quotas etc.

Couple of approaches that we can consider here:

1. Both sets of classes will be added in server-common module but common 
classes will have a package prefix as org.apache.kafka.server.common. But 
public classes will continue to have the existing package names. We will 
generate javadocs for these public classes.
 Pros
 - Users and server modules will be dependent on a single module for both common 
and public APIs.
 - Both common classes and API classes can be dependent on each other.

Cons
 - Not a clean separation between common classes and public classes.

2. Common classes used by server modules will be added in server-common module. 
We will create another module called server-api for server side public classes.

Pros
 - It gives a neat separation between common and public classes.
 - Maintaining the growth of these modules will be easy.

Cons
 - We cannot have common and API classes dependent on each other, which 
will cause a circular dependency.

 

Please feel free to modify/add other approaches in modularizing these classes. 

 

  was:
There are two sets of classes that we want to pull out here for server 
common/api classes.

1. All the common classes used by server modules.
 2. All the public server side classes exposed to users. Some of these classes 
are storage-api, security, quotas etc.

Couple of approaches that we can consider here:

1. Both sets of classes will be added in server-common module but common 
classes will have a package prefix as org.apache.kafka.server.common. But 
public classes will continue to have the existing package names. We will 
generate javadocs for these public classes.
 Pros
 - Users and server modules will be dependent on a single module for both common 
and public APIs.
 - Both common classes and API classes can be dependent on each other.

Cons
 - Not a clean separation between common classes and public classes.

2. Common classes used by server modules will be added in server-common module. 
We will create another module called server-api for server side public classes.

Pros
 - It gives a neat separation between common and public classes.
 - Maintaining the growth of these modules will be easy.

Cons
 - We cannot have common and API classes dependent on each other, which 
will cause a circular dependency.

 

Please feel free to modify if you want to discuss other approaches in 
modularizing these classes. 

 


> Move server related common and public classes into separate module(s).
> --
>
> Key: KAFKA-12757
> URL: https://issues.apache.org/jira/browse/KAFKA-12757
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Satish Duggana
>Priority: Major
>
> There are two sets of classes that we want to pull out here for server 
> common/api classes.
> 1. All the common classes used by server modules.
>  2. All the public server side classes exposed to users. Some of these 
> classes are storage-api, security, quotas etc.
> Couple of approaches that we can consider here:
> 1. Both sets of classes will be added in server-common module but common 
> classes will have a package prefix as org.apache.kafka.server.common. But 
> public classes will continue to have the existing package names. We will 
> generate javadocs for these public classes.
>  Pros
>  - Users and server modules will be dependent on a single module for both 
> common and public APIs.
>  - Both common classes and API classes can be dependent on each other.
> Cons
>  - Not a clean separation between common classes and public classes.
> 2. Common classes used by server modules will be added in server-common 
> module. We will create another module called server-api for server side 
> public classes.
> Pros
>  - It gives a neat separation between common and public classes.
>  - Maintaining the growth of these modules will be easy.
> Cons
>  - We cannot have common and API classes dependent on each other, which 
> will cause a circular dependency.
>  
> Please feel free to modify/add other approaches in modularizing these 
> classes. 
>  
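
To make option 1 concrete, a minimal sketch of an internal helper under the 
proposed package prefix (hypothetical class name, for illustration only; public 
classes in the same artifact would keep their existing packages):

```java
// Lives in the kafka-server-common artifact; the org.apache.kafka.server.common
// prefix marks it as shared server-side internals rather than public API.
package org.apache.kafka.server.common;

public final class ServerCommonExample {
    private ServerCommonExample() { }
    // small utilities shared by the server modules would go here
}
```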



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] dejan2609 edited a comment on pull request #10658: POC for CheckStyle 8.42 regression (with 'Unnecessary Parentheses' errors)

2021-05-10 Thread GitBox


dejan2609 edited a comment on pull request #10658:
URL: https://github.com/apache/kafka/pull/10658#issuecomment-836815768


   Rebased onto trunk and force-pushed (as explained in a conversation/review 
above).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] dejan2609 commented on pull request #10658: POC for CheckStyle 8.42 regression (with 'Unnecessary Parentheses' errors)

2021-05-10 Thread GitBox


dejan2609 commented on pull request #10658:
URL: https://github.com/apache/kafka/pull/10658#issuecomment-836815768


   Rebased onto trunk and force-pushed (as explained in a conversation above).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] dejan2609 commented on a change in pull request #10658: POC for CheckStyle 8.42 regression (with 'Unnecessary Parentheses' errors)

2021-05-10 Thread GitBox


dejan2609 commented on a change in pull request #10658:
URL: https://github.com/apache/kafka/pull/10658#discussion_r629430530



##
File path: checkstyle/checkstyle.xml
##
@@ -91,7 +91,7 @@
   
   
 
-
+

Review comment:
   Roger that @romani. 
   
   Ok, I will revoke this module exclusion then (and then you will be able to 
receive **_all_** of Kafka's CheckStyle 8.42 violations). 
   
   I will also rebase this PR onto trunk (CheckStyle upgrade from 8.20 to 
8.36.2 was merged to trunk today).




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] mimaison commented on a change in pull request #10660: MINOR: Updating files with release 2.7.1

2021-05-10 Thread GitBox


mimaison commented on a change in pull request #10660:
URL: https://github.com/apache/kafka/pull/10660#discussion_r629427422



##
File path: tests/docker/Dockerfile
##
@@ -61,7 +61,7 @@ RUN mkdir -p "/opt/kafka-2.3.1" && chmod a+rw 
/opt/kafka-2.3.1 && curl -s "$KAFK
 RUN mkdir -p "/opt/kafka-2.4.1" && chmod a+rw /opt/kafka-2.4.1 && curl -s 
"$KAFKA_MIRROR/kafka_2.12-2.4.1.tgz" | tar xz --strip-components=1 -C 
"/opt/kafka-2.4.1"
 RUN mkdir -p "/opt/kafka-2.5.1" && chmod a+rw /opt/kafka-2.5.1 && curl -s 
"$KAFKA_MIRROR/kafka_2.12-2.5.1.tgz" | tar xz --strip-components=1 -C 
"/opt/kafka-2.5.1"
 RUN mkdir -p "/opt/kafka-2.6.2" && chmod a+rw /opt/kafka-2.6.2 && curl -s 
"$KAFKA_MIRROR/kafka_2.12-2.6.2.tgz" | tar xz --strip-components=1 -C 
"/opt/kafka-2.6.2"
-RUN mkdir -p "/opt/kafka-2.7.0" && chmod a+rw /opt/kafka-2.7.0 && curl -s 
"$KAFKA_MIRROR/kafka_2.12-2.7.0.tgz" | tar xz --strip-components=1 -C 
"/opt/kafka-2.7.0"
+RUN mkdir -p "/opt/kafka-2.7.1" && chmod a+rw /opt/kafka-2.7.1 && curl -s 
"$KAFKA_MIRROR/kafka_2.12-2.7.1.tgz" | tar xz --strip-components=1 -C 
"/opt/kafka-2.7.1"

Review comment:
   See my comment in 
https://github.com/apache/kafka/pull/10660#issue-637378228




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] jlprat commented on a change in pull request #10643: KAFKA-12747: Fix flakiness in shouldReturnUUIDsWithStringPrefix

2021-05-10 Thread GitBox


jlprat commented on a change in pull request #10643:
URL: https://github.com/apache/kafka/pull/10643#discussion_r629341305



##
File path: 
streams/src/test/java/org/apache/kafka/streams/state/internals/RocksDBStoreTest.java
##
@@ -451,17 +453,14 @@ public void shouldReturnUUIDsWithStringPrefix() {
 rocksDBStore.flush();
 
 final KeyValueIterator<Bytes, byte[]> keysWithPrefix = 
rocksDBStore.prefixScan(prefix, stringSerializer);
-final List<String> valuesWithPrefix = new ArrayList<>();
 int numberOfKeysReturned = 0;
 
 while (keysWithPrefix.hasNext()) {
-final KeyValue<Bytes, byte[]> next = keysWithPrefix.next();
-valuesWithPrefix.add(new String(next.value));
+keysWithPrefix.next();
 numberOfKeysReturned++;
 }
 
-assertThat(numberOfKeysReturned, is(1));
-assertThat(valuesWithPrefix.get(0), is("a"));

Review comment:
   I would still try to keep this assertion, something among the lines of:
   ```java
   if (clashOnPrefix) {
   assertThat(valuesWithPrefix.get(0), either(is("a")).or(is("b")));
   } else {
   assertThat(valuesWithPrefix.get(0), is("a"));
   }
   ```
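   
   (For reference, `either(...).or(...)` is Hamcrest's `CombinableMatcher`, 
available via `org.hamcrest.CoreMatchers.either`, so the check needs nothing 
beyond Hamcrest core.)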




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] showuon commented on pull request #10552: KAFKA-12675: improve the sticky general assignor scalability and performance

2021-05-10 Thread GitBox


showuon commented on pull request #10552:
URL: https://github.com/apache/kafka/pull/10552#issuecomment-836655715


   I'll make some refinements to this PR. Please wait for a while. Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] jlprat commented on pull request #10643: KAFKA-12747: Fix flakiness in shouldReturnUUIDsWithStringPrefix

2021-05-10 Thread GitBox


jlprat commented on pull request #10643:
URL: https://github.com/apache/kafka/pull/10643#issuecomment-836631952


   You can also take a look at 
https://github.com/apache/kafka/pull/10662/files#diff-944ff4a8c4eab2003ab210f5a2fceac1d124e71fcfe383b736add11803a4fd4cR468
 (in there I was checking that in case of prefix clash, the content was either 
"a" or "b"


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] jlprat closed pull request #10662: KAFKA-12747: Fix flaky test RocksDBStoreTest.shouldReturnUUIDsWithStr…

2021-05-10 Thread GitBox


jlprat closed pull request #10662:
URL: https://github.com/apache/kafka/pull/10662


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] jlprat commented on pull request #10662: KAFKA-12747: Fix flaky test RocksDBStoreTest.shouldReturnUUIDsWithStr…

2021-05-10 Thread GitBox


jlprat commented on pull request #10662:
URL: https://github.com/apache/kafka/pull/10662#issuecomment-836630173


   Now I see another PR was already there: 
https://github.com/apache/kafka/pull/10643


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (KAFKA-12747) Flaky Test RocksDBStoreTest.shouldReturnUUIDsWithStringPrefix

2021-05-10 Thread Josep Prat (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341872#comment-17341872
 ] 

Josep Prat commented on KAFKA-12747:


There was already another PR

> Flaky Test RocksDBStoreTest.shouldReturnUUIDsWithStringPrefix
> -
>
> Key: KAFKA-12747
> URL: https://issues.apache.org/jira/browse/KAFKA-12747
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: A. Sophie Blee-Goldman
>Priority: Major
>  Labels: flaky-test, newbie, newbie++, unit-test
>
> Stacktrace
> java.lang.AssertionError: 
> Expected: is <1>
>  but: was <2>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBStoreTest.shouldReturnUUIDsWithStringPrefix(RocksDBStoreTest.java:463)
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-10597/10/tests/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-12747) Flaky Test RocksDBStoreTest.shouldReturnUUIDsWithStringPrefix

2021-05-10 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat reassigned KAFKA-12747:
--

Assignee: (was: Josep Prat)

> Flaky Test RocksDBStoreTest.shouldReturnUUIDsWithStringPrefix
> -
>
> Key: KAFKA-12747
> URL: https://issues.apache.org/jira/browse/KAFKA-12747
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: A. Sophie Blee-Goldman
>Priority: Major
>  Labels: flaky-test, newbie, newbie++, unit-test
>
> Stacktrace
> java.lang.AssertionError: 
> Expected: is <1>
>  but: was <2>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBStoreTest.shouldReturnUUIDsWithStringPrefix(RocksDBStoreTest.java:463)
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-10597/10/tests/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] jlprat opened a new pull request #10662: KAFKA-12747: Fix flaky test RocksDBStoreTest.shouldReturnUUIDsWithStr…

2021-05-10 Thread GitBox


jlprat opened a new pull request #10662:
URL: https://github.com/apache/kafka/pull/10662


   …ingPrefix
   
   Fixes the test by checking if there is a prefix collision.
   Following comment on [JIRA 
ticket](https://issues.apache.org/jira/browse/KAFKA-12747?focusedCommentId=17338612=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17338612),
 in case of prefix clash, we check that the returned value can be either of 
the two (a or b).
   
   Alternatively, we could re-generate the second UUID in case of prefix
   clash until they are clash-free.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] chia7712 commented on a change in pull request #10660: MINOR: Updating files with release 2.7.1

2021-05-10 Thread GitBox


chia7712 commented on a change in pull request #10660:
URL: https://github.com/apache/kafka/pull/10660#discussion_r629292359



##
File path: tests/docker/Dockerfile
##
@@ -61,7 +61,7 @@ RUN mkdir -p "/opt/kafka-2.3.1" && chmod a+rw 
/opt/kafka-2.3.1 && curl -s "$KAFK
 RUN mkdir -p "/opt/kafka-2.4.1" && chmod a+rw /opt/kafka-2.4.1 && curl -s 
"$KAFKA_MIRROR/kafka_2.12-2.4.1.tgz" | tar xz --strip-components=1 -C 
"/opt/kafka-2.4.1"
 RUN mkdir -p "/opt/kafka-2.5.1" && chmod a+rw /opt/kafka-2.5.1 && curl -s 
"$KAFKA_MIRROR/kafka_2.12-2.5.1.tgz" | tar xz --strip-components=1 -C 
"/opt/kafka-2.5.1"
 RUN mkdir -p "/opt/kafka-2.6.2" && chmod a+rw /opt/kafka-2.6.2 && curl -s 
"$KAFKA_MIRROR/kafka_2.12-2.6.2.tgz" | tar xz --strip-components=1 -C 
"/opt/kafka-2.6.2"
-RUN mkdir -p "/opt/kafka-2.7.0" && chmod a+rw /opt/kafka-2.7.0 && curl -s 
"$KAFKA_MIRROR/kafka_2.12-2.7.0.tgz" | tar xz --strip-components=1 -C 
"/opt/kafka-2.7.0"
+RUN mkdir -p "/opt/kafka-2.7.1" && chmod a+rw /opt/kafka-2.7.1 && curl -s 
"$KAFKA_MIRROR/kafka_2.12-2.7.1.tgz" | tar xz --strip-components=1 -C 
"/opt/kafka-2.7.1"

Review comment:
   the link 
(https://s3-us-west-2.amazonaws.com/kafka-packages/kafka_2.12-2.7.1.tgz) is not 
available now. Could you fix it?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] satishd edited a comment on pull request #10638: KAFKA-12758 Added `server-common` module to have server side common classes.

2021-05-10 Thread GitBox


satishd edited a comment on pull request #10638:
URL: https://github.com/apache/kafka/pull/10638#issuecomment-834307680


    Thanks @junrao for the review comments. Please see the inline replies and the latest 
commits.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (KAFKA-12747) Flaky Test RocksDBStoreTest.shouldReturnUUIDsWithStringPrefix

2021-05-10 Thread Josep Prat (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josep Prat reassigned KAFKA-12747:
--

Assignee: Josep Prat

> Flaky Test RocksDBStoreTest.shouldReturnUUIDsWithStringPrefix
> -
>
> Key: KAFKA-12747
> URL: https://issues.apache.org/jira/browse/KAFKA-12747
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: A. Sophie Blee-Goldman
>Assignee: Josep Prat
>Priority: Major
>  Labels: flaky-test, newbie, newbie++, unit-test
>
> Stacktrace
> java.lang.AssertionError: 
> Expected: is <1>
>  but: was <2>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBStoreTest.shouldReturnUUIDsWithStringPrefix(RocksDBStoreTest.java:463)
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-10597/10/tests/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-12747) Flaky Test RocksDBStoreTest.shouldReturnUUIDsWithStringPrefix

2021-05-10 Thread Josep Prat (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341834#comment-17341834
 ] 

Josep Prat commented on KAFKA-12747:


I'd like to try to solve this one.

> Flaky Test RocksDBStoreTest.shouldReturnUUIDsWithStringPrefix
> -
>
> Key: KAFKA-12747
> URL: https://issues.apache.org/jira/browse/KAFKA-12747
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: A. Sophie Blee-Goldman
>Assignee: Josep Prat
>Priority: Major
>  Labels: flaky-test, newbie, newbie++, unit-test
>
> Stacktrace
> java.lang.AssertionError: 
> Expected: is <1>
>  but: was <2>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBStoreTest.shouldReturnUUIDsWithStringPrefix(RocksDBStoreTest.java:463)
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-10597/10/tests/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] satishd commented on a change in pull request #10638: KAFKA-12758 Added `server-common` module to have server side common classes.

2021-05-10 Thread GitBox


satishd commented on a change in pull request #10638:
URL: https://github.com/apache/kafka/pull/10638#discussion_r629269557



##
File path: build.gradle
##
@@ -1345,6 +1349,62 @@ project(':raft') {
   }
 }
 
+project(':server-common') {
+  archivesBaseName = "kafka-server-common"

Review comment:
   Created https://issues.apache.org/jira/browse/KAFKA-12757 to discuss 
more on this. 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (KAFKA-12757) Move server related common and public classes into separate module(s).

2021-05-10 Thread Satish Duggana (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana updated KAFKA-12757:
---
Description: 
There are two sets of classes that we want to pull out here for server 
common/api classes.

1. All the common classes used by server modules.
 2. All the public server side classes exposed to users. Some of these classes 
are storage-api, security, quotas etc.

Couple of approaches that we can consider here:

1. Both sets of classes will be added in server-common module but common 
classes will have a package prefix as org.apache.kafka.server.common. But 
public classes will continue to have the existing package names. We will 
generate javadocs for these public classes.
 Pros
 - Users and server modules will be dependent on a single module for both common 
and public APIs.
 - Both common classes and API classes can be dependent on each other.

Cons
 - Not a clean separation between common classes and public classes.

2. Common classes used by server modules will be added in server-common module. 
We will create another module called server-api for server side public classes.

Pros
 - It gives a neat separation between common and public classes.
 - Maintaining the growth of these modules will be easy.

Cons
 - We cannot have common and API classes dependent on each other, which 
will cause a circular dependency.

 

Please feel free to modify if you want to discuss other approaches in 
modularizing these classes. 

 

  was:
There are two sets of classes that we want to pull out here for server 
common/api classes.

1. All the common classes used by server modules.
 2. All the public server side classes exposed to users. Some of these classes 
are storage-api, security, quotas etc.

Couple of approaches that we can consider here:

1. Both sets of classes will be added in server-common module but common 
classes will have a package prefix as org.apache.kafka.server.common. But 
public classes will continue to have the existing package names. We will 
generate javadocs for these public classes.
 Pros
 - Users and server modules will be dependent on a single module for both common 
and public APIs.
 - Both common classes and API classes can be dependent on each other.

Cons
 - Not a clean separation between common classes and public classes.

2. Common classes used by server modules will be added in server-common module. 
We will create another module called server-api for server side public classes.

Pros
 - It gives a neat separation between common and public classes.
 - Maintaining the growth of these modules will be easy.

Cons
 - We cannot have common and API classes dependent on each other, which 
will cause a circular dependency.

Please feel free to add if you want to discuss other approaches in modularizing 
these classes. 

 


> Move server related common and public classes into separate module(s).
> --
>
> Key: KAFKA-12757
> URL: https://issues.apache.org/jira/browse/KAFKA-12757
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Satish Duggana
>Priority: Major
>
> There are two sets of classes that we want to pull out here for server 
> common/api classes.
> 1. All the common classes used by server modules.
>  2. All the public server side classes exposed to users. Some of these 
> classes are storage-api, security, quotas etc.
> Couple of approaches that we can consider here:
> 1. Both sets of classes will be added in server-common module but common 
> classes will have a package prefix as org.apache.kafka.server.common. But 
> public classes will continue to have the existing package names. We will 
> generate javadocs for these public classes.
>  Pros
>  - Users and server modules will be dependent on a single module for both 
> common and public APIs.
>  - Both common classes and API classes can be dependent on each other.
> Cons
>  - Not a clean separation between common classes and public classes.
> 2. Common classes used by server modules will be added in server-common 
> module. We will create another module called server-api for server side 
> public classes.
> Pros
>  - It gives a neat separation between common and public classes.
>  - Maintaining the growth of these modules will be easy.
> Cons
>  - We cannot have common and API classes dependent on each other, which 
> will cause a circular dependency.
>  
> Please feel free to modify if you want to discuss other approaches in 
> modularizing these classes. 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] chia7712 opened a new pull request #10661: MINOR: upgrade pip from 20.2.2 to 21.1.1

2021-05-10 Thread GitBox


chia7712 opened a new pull request #10661:
URL: https://github.com/apache/kafka/pull/10661


   The following error happens on my mac m1 when building docker image for 
system tests.
   
   ```
   Collecting pynacl
 Using cached PyNaCl-1.4.0.tar.gz (3.4 MB)
 Installing build dependencies ... error
 ERROR: Command errored out with exit status 1:
  command: /usr/bin/python3 /usr/local/lib/python3.8/dist-packages/pip 
install --ignore-installed --no-user --prefix 
/tmp/pip-build-env-k867aac0/overlay --no-warn-script-location --no-binary 
:none: --only-binary :none: -i https://pypi.org/simple -- 'setuptools>=40.8.0' 
wheel 'cffi>=1.4.1; python_implementation != '"'"'PyPy'"'"''
  cwd: None
 Complete output (14 lines):
 Traceback (most recent call last):
   File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
 return _run_code(code, main_globals, None,
   File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
 exec(code, run_globals)
   File "/usr/local/lib/python3.8/dist-packages/pip/__main__.py", line 23, 
in 
 from pip._internal.cli.main import main as _main  # isort:skip # noqa
   File "/usr/local/lib/python3.8/dist-packages/pip/_internal/cli/main.py", 
line 5, in 
 import locale
   File "/usr/lib/python3.8/locale.py", line 16, in 
 import re
   File "/usr/lib/python3.8/re.py", line 145, in 
 class RegexFlag(enum.IntFlag):
 AttributeError: module 'enum' has no attribute 'IntFlag'
 
   ERROR: Command errored out with exit status 1: /usr/bin/python3 
/usr/local/lib/python3.8/dist-packages/pip install --ignore-installed --no-user 
--prefix /tmp/pip-build-env-k867aac0/overlay --no-warn-script-location 
--no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 
'setuptools>=40.8.0' wheel 'cffi>=1.4.1; python_implementation != 
'"'"'PyPy'"'"'' Check the logs for full command output.
   ```
   
   There was a related issue: https://github.com/pypa/pip/pull/9689, and it is 
already fixed by https://github.com/pypa/pip/pull/9689 (included in pip 
21.1.1). I tested pip 21.1.1 and it works well on mac m1.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (KAFKA-12757) Move server related common and public classes into separate module(s).

2021-05-10 Thread Satish Duggana (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17341829#comment-17341829
 ] 

Satish Duggana commented on KAFKA-12757:


cc [~junrao] [~ijuma]

> Move server related common and public classes into separate module(s).
> --
>
> Key: KAFKA-12757
> URL: https://issues.apache.org/jira/browse/KAFKA-12757
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Satish Duggana
>Priority: Major
>
> There are two sets of classes that we want to pull out here for server 
> common/api classes.
> 1. All the common classes used by server modules.
>  2. All the public server side classes exposed to users. Some of these 
> classes are storage-api, security, quotas etc.
> Couple of approaches that we can consider here:
> 1. Both sets of classes will be added in server-common module but common 
> classes will have a package prefix as org.apache.kafka.server.common. But 
> public classes will continue to have the existing package names. We will 
> generate javadocs for these public classes.
>  Pros
>  - Users and server modules will be dependent on a single module for both 
> common and public APIs.
>  - Both common classes and API classes can be dependent on each other.
> Cons
>  - Not a clean separation between common classes and public classes.
> 2. Common classes used by server modules will be added in server-common 
> module. We will create another module called server-api for server side 
> public classes.
> Pros
>  - It gives a neat separation between common and public classes.
>  - Maintaining the growth of these modules will be easy.
> Cons
>  - We cannot have common and API classes dependent on each other, which 
> will cause a circular dependency.
> Please feel free to add if you want to discuss other approaches in modularizing 
> these classes. 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12757) Move server related common and public classes into separate module(s).

2021-05-10 Thread Satish Duggana (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana updated KAFKA-12757:
---
Description: 
There are two sets of classes that we want to pull out here for server 
common/api classes.

1. All the common classes used by server modules.
2. All the public server side classes exposed to users. Some of these classes 
are storage-api, security, quotas etc.

Couple of approaches that we can consider here:

1. Both sets of classes will be added in server-common module but common 
classes will have a package prefix as org.apache.kafka.server.common. But 
public classes will continue to have the existing package names. We will 
generate javadocs for these public classes.
Pros
 - Users and server modules will be dependent on a single module for both common 
and public APIs.
 - Both common classes and API classes can be dependent on each other.

Cons
 - Not a clean separation between common classes and public classes.

2. Common classes used by server modules will be added in server-common module. 
We will create another module called server-api for server side public classes.

Pros
 - It gives a neat separation between common and public classes.
 - Maintaining the growth of these modules will be easy.

Cons
 - We cannot have common and API classes dependent on each other, which 
will cause a circular dependency.

Please feel free to add if you want to discuss other approaches in modularizing 
these classes. 

 

  was:
There are two sets of classes that we want to pull out here for server 
common/api classes.

1. All the common classes used by server modules.
2. All the public server side classes exposed to users. Some of these classes 
are storage-api, security, quotas etc.


Couple of approaches that we can consider here:

1. Both sets of classes will be added in server-common module but common 
classes will have a package prefix as org.apache.kafka.server.common. But 
public classes will continue to have the existing package names. We will 
generate javadocs for these public classes.

2. Common classes used by server modules will be added in server-common module. 
We will create another module called server-api for server side public classes.

Please feel free to add if you want to discuss other approaches in modularizing 
these classes. 

 


> Move server related common and public classes into separate module(s).
> --
>
> Key: KAFKA-12757
> URL: https://issues.apache.org/jira/browse/KAFKA-12757
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Satish Duggana
>Priority: Major
>
> There are two sets of classes that we want to pull out here for server 
> common/api classes.
> 1. All the common classes used by server modules.
> 2. All the public server side classes exposed to users. Some of these classes 
> are storage-api, security, quotas etc.
> Couple of approaches that we can consider here:
> 1. Both sets of classes will be added in server-common module but common 
> classes will have a package prefix as org.apache.kafka.server.common. But 
> public classes will continue to have the existing package names. We will 
> generate javadocs for these public classes.
> Pros
>  - Users and server modules will be dependent on a single module for both 
> common and public APIs.
>  - Both common classes and API classes can be dependent on each other.
> Cons
>  - Not a clean separation between common classes and public classes.
> 2. Common classes used by server modules will be added in server-common 
> module. We will create another module called server-api for server side 
> public classes.
> Pros
>  - It gives a neat separation between common and public classes.
>  - Maintaining the growth of these modules will be easy.
> Cons
>  - We cannot have common and API classes dependent on each other, which 
> will cause a circular dependency.
> Please feel free to add if you want to discuss other approaches in modularizing 
> these classes. 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-12757) Move server related common and public classes into separate module(s).

2021-05-10 Thread Satish Duggana (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana updated KAFKA-12757:
---
Description: 
There are two sets of classes that we want to pull out here for server 
common/api classes.

1. All the common classes used by server modules.
 2. All the public server side classes exposed to users. Some of these classes 
are storage-api, security, quotas etc.

Couple of approaches that we can consider here:

1. Both sets of classes will be added to the server-common module, but common 
classes will have the package prefix org.apache.kafka.server.common, while 
public classes will continue to use their existing package names. We will 
generate javadocs for these public classes (see the package sketch after this 
list).
 Pros
 - Users and server modules will depend on a single module for both common 
and public APIs.
 - Common classes and API classes can depend on each other.

Cons
 - Not a clean separation between common classes and public classes.

2. Common classes used by server modules will be added to the server-common 
module. We will create another module called server-api for server-side public 
classes.

Pros
 - It gives a neat separation between common and public classes.
 - Managing the growth of these modules will be easy.

Cons
 - Common and API classes cannot depend on each other, as that would cause a 
circular dependency.
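
For approach 1, as referenced above, here is a sketch of the package 
convention inside the single server-common module. The class names are 
hypothetical; only the org.apache.kafka.server.common prefix comes from this 
proposal.

  // File 1 -- an internal common class: gets the new package prefix and is
  // excluded from the generated javadocs.
  package org.apache.kafka.server.common
  class InternalCheckpointUtils            // hypothetical internal helper

  // File 2, same module -- a public server-side class: keeps its existing
  // package name and is included in the generated javadocs.
  package org.apache.kafka.server.quota
  class ExampleQuotaCallback               // hypothetical public class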

Please feel free to suggest other approaches to modularizing these classes.

 

  was:
There are two sets of classes that we want to pull out into separate server 
common/API module(s).

1. All the common classes used by server modules.
2. All the public server-side classes exposed to users. Examples include the 
storage-api, security, and quota classes.

A couple of approaches we can consider:

1. Both sets of classes will be added to the server-common module, but common 
classes will have the package prefix org.apache.kafka.server.common, while 
public classes will continue to use their existing package names. We will 
generate javadocs for these public classes.
Pros
 - Users and server modules will depend on a single module for both common 
and public APIs.
 - Common classes and API classes can depend on each other.

Cons
 - Not a clean separation between common classes and public classes.

2. Common classes used by server modules will be added to the server-common 
module. We will create another module called server-api for server-side public 
classes.

Pros
 - It gives a neat separation between common and public classes.
 - Managing the growth of these modules will be easy.

Cons
 - Common and API classes cannot depend on each other, as that would cause a 
circular dependency.

Please feel free to suggest other approaches to modularizing these classes.

 


> Move server related common and public classes into separate module(s).
> --
>
> Key: KAFKA-12757
> URL: https://issues.apache.org/jira/browse/KAFKA-12757
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Satish Duggana
>Priority: Major
>
> There are two sets of classes that we want to pull out into separate server 
> common/API module(s).
> 1. All the common classes used by server modules.
>  2. All the public server-side classes exposed to users. Examples include 
> the storage-api, security, and quota classes.
> A couple of approaches we can consider:
> 1. Both sets of classes will be added to the server-common module, but common 
> classes will have the package prefix org.apache.kafka.server.common, while 
> public classes will continue to use their existing package names. We will 
> generate javadocs for these public classes.
>  Pros
>  - Users and server modules will depend on a single module for both 
> common and public APIs.
>  - Common classes and API classes can depend on each other.
> Cons
>  - Not a clean separation between common classes and public classes.
> 2. Common classes used by server modules will be added to the server-common 
> module. We will create another module called server-api for server-side 
> public classes.
> Pros
>  - It gives a neat separation between common and public classes.
>  - Managing the growth of these modules will be easy.
> Cons
>  - Common and API classes cannot depend on each other, as that would cause 
> a circular dependency.
> Please feel free to suggest other approaches to modularizing these classes.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

