[jira] [Commented] (KAFKA-12393) Document multi-tenancy considerations

2021-03-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17295077#comment-17295077
 ] 

ASF GitHub Bot commented on KAFKA-12393:


miguno commented on pull request #334:
URL: https://github.com/apache/kafka-site/pull/334#issuecomment-790412618


   Thanks all for reviewing, much appreciated!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Document multi-tenancy considerations
> -
>
> Key: KAFKA-12393
> URL: https://issues.apache.org/jira/browse/KAFKA-12393
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Michael G. Noll
>Assignee: Michael G. Noll
>Priority: Minor
>
> We should provide an overview of multi-tenancy considerations (e.g., user 
> spaces, security) as the current documentation lacks such information.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] priyavj08 commented on pull request #7898: KAFKA-9366: Change log4j dependency into log4j2

2021-03-04 Thread GitBox


priyavj08 commented on pull request #7898:
URL: https://github.com/apache/kafka/pull/7898#issuecomment-790427075


   @dongjinleekr when will this fix make it into one of the upstream Kafka releases?
   
   thanks







[GitHub] [kafka] sofarsoghood commented on pull request #7898: KAFKA-9366: Change log4j dependency into log4j2

2021-03-04 Thread GitBox


sofarsoghood commented on pull request #7898:
URL: https://github.com/apache/kafka/pull/7898#issuecomment-790429327


   > @dongjinleekr really appreciate your guidance here. thanks for the patch.
   > 
   > If I chose not to move to this patch right away, can you please confirm that this vulnerability in log4j ([CVE-2019-17571](https://github.com/advisories/GHSA-2qrg-x229-3v8q)) doesn't affect Kafka?
   > 
   > thanks
   
   @priyavj08 we have now checked Kafka's source code for any appearances of the SocketServer class or corresponding config files, but were not able to find any. Furthermore, we took a closer look at the listening ports inside the running containers.
   
   Conclusion: it looks like the affected SocketServer class is not used by Kafka.







[GitHub] [kafka] showuon commented on a change in pull request #10257: KAFKA-12407: Document omitted Controller Health Metrics

2021-03-04 Thread GitBox


showuon commented on a change in pull request #10257:
URL: https://github.com/apache/kafka/pull/10257#discussion_r587260885



##
File path: docs/ops.html
##
@@ -1271,6 +1271,24 @@ http://kafka.apache.org/documentation/#brokerconfigs_broker.id









[GitHub] [kafka] dongjinleekr commented on a change in pull request #10257: KAFKA-12407: Document omitted Controller Health Metrics

2021-03-04 Thread GitBox


dongjinleekr commented on a change in pull request #10257:
URL: https://github.com/apache/kafka/pull/10257#discussion_r587300188



##
File path: docs/ops.html
##
@@ -1271,6 +1271,24 @@ 

[GitHub] [kafka] dajac merged pull request #10234: MINOR; Clean up LeaderAndIsrResponse construction in `ReplicaManager#becomeLeaderOrFollower`

2021-03-04 Thread GitBox


dajac merged pull request #10234:
URL: https://github.com/apache/kafka/pull/10234


   







[GitHub] [kafka] dongjinleekr commented on pull request #7898: KAFKA-9366: Change log4j dependency into log4j2

2021-03-04 Thread GitBox


dongjinleekr commented on pull request #7898:
URL: https://github.com/apache/kafka/pull/7898#issuecomment-790473929


   @priyavj08 Since this KIP has already passed, it will be included in the 2.8.0 release.







[GitHub] [kafka] chia7712 opened a new pull request #10261: MINOR: using INFO level to log 'no meta.properties' for broker server

2021-03-04 Thread GitBox


chia7712 opened a new pull request #10261:
URL: https://github.com/apache/kafka/pull/10261


   That is expected behavior for the broker server, so the "warn" 
level seems like overkill to me.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   







[GitHub] [kafka] chia7712 commented on pull request #9758: MINOR: remove FetchResponse.AbortedTransaction and redundant construc…

2021-03-04 Thread GitBox


chia7712 commented on pull request #9758:
URL: https://github.com/apache/kafka/pull/9758#issuecomment-790491007


   ```
   Build / JDK 11 / 
org.apache.kafka.connect.integration.BlockingConnectorTest.testWorkerRestartWithBlockInConnectorStop
   ```
   
   unrelated flaky







[GitHub] [kafka] chia7712 merged pull request #9758: MINOR: remove FetchResponse.AbortedTransaction and redundant construc…

2021-03-04 Thread GitBox


chia7712 merged pull request #9758:
URL: https://github.com/apache/kafka/pull/9758


   







[GitHub] [kafka] dajac commented on a change in pull request #10078: MINOR: make sure all generated data tests cover all versions

2021-03-04 Thread GitBox


dajac commented on a change in pull request #10078:
URL: https://github.com/apache/kafka/pull/10078#discussion_r587357462



##
File path: clients/src/test/java/org/apache/kafka/common/requests/ProduceRequestTest.java
##
@@ -291,19 +292,11 @@ public void testMixedIdempotentData() {
         assertTrue(RequestTestUtils.hasIdempotentRecords(request));
     }

-    private void assertThrowsInvalidRecordExceptionForAllVersions(ProduceRequest.Builder builder) {
-        for (short version = builder.oldestAllowedVersion(); version < builder.latestAllowedVersion(); version++) {
-            assertThrowsInvalidRecordException(builder, version);
-        }
-    }
-
-    private void assertThrowsInvalidRecordException(ProduceRequest.Builder builder, short version) {
-        try {
-            builder.build(version).serialize();
-            fail("Builder did not raise " + InvalidRecordException.class.getName() + " as expected");
-        } catch (RuntimeException e) {
-            assertTrue(InvalidRecordException.class.isAssignableFrom(e.getClass()),
-                "Unexpected exception type " + e.getClass().getName());
+    private static void assertThrowsInvalidRecordExceptionForAllVersions(ProduceRequest.Builder builder,

Review comment:
   Should we rename the method now that the exception is passed as an 
argument?

##
File path: clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
##
@@ -295,7 +295,7 @@ public void testSerialization() throws Exception {
         checkRequest(createDeleteGroupsRequest(), true);
         checkErrorResponse(createDeleteGroupsRequest(), unknownServerException, true);
         checkResponse(createDeleteGroupsResponse(), 0, true);
-        for (int i = 0; i < ApiKeys.LIST_OFFSETS.latestVersion(); i++) {
+        for (int i = 0; i <= ApiKeys.LIST_OFFSETS.latestVersion(); i++) {

Review comment:
   Could we use `allVersions` here as well?

##
File path: clients/src/test/java/org/apache/kafka/common/requests/ProduceRequestTest.java
##
@@ -291,19 +292,11 @@ public void testMixedIdempotentData() {
         assertTrue(RequestTestUtils.hasIdempotentRecords(request));
     }

-    private void assertThrowsInvalidRecordExceptionForAllVersions(ProduceRequest.Builder builder) {
-        for (short version = builder.oldestAllowedVersion(); version < builder.latestAllowedVersion(); version++) {
-            assertThrowsInvalidRecordException(builder, version);
-        }
-    }
-
-    private void assertThrowsInvalidRecordException(ProduceRequest.Builder builder, short version) {
-        try {
-            builder.build(version).serialize();
-            fail("Builder did not raise " + InvalidRecordException.class.getName() + " as expected");
-        } catch (RuntimeException e) {
-            assertTrue(InvalidRecordException.class.isAssignableFrom(e.getClass()),
-                "Unexpected exception type " + e.getClass().getName());
+    private static void assertThrowsInvalidRecordExceptionForAllVersions(ProduceRequest.Builder builder,
+                                                                         Class expectedType) {
+        for (short version = builder.oldestAllowedVersion(); version <= builder.latestAllowedVersion(); version++) {
+            short v = version;

Review comment:
   Could we get rid of `v` and use `version` below directly?

##
File path: clients/src/test/java/org/apache/kafka/common/message/MessageTest.java
##
@@ -485,7 +485,7 @@ public void testOffsetCommitRequestVersions() throws Exception {
         if (version == 1) {
             testEquivalentMessageRoundTrip(version, requestData);
         } else if (version >= 2 && version <= 4) {
-            testAllMessageRoundTripsBetweenVersions(version, (short) 4, requestData, requestData);
+            testAllMessageRoundTripsBetweenVersions(version, (short) 5, requestData, requestData);

Review comment:
   Why are we changing this?

##
File path: clients/src/test/java/org/apache/kafka/common/requests/OffsetsForLeaderEpochRequestTest.java
##
@@ -34,15 +34,15 @@ public void testForConsumerRequiresVersion3() {
             assertThrows(UnsupportedVersionException.class, () -> builder.build(v));
         }

-        for (short version = 3; version < ApiKeys.OFFSET_FOR_LEADER_EPOCH.latestVersion(); version++) {
+        for (short version = 3; version <= ApiKeys.OFFSET_FOR_LEADER_EPOCH.latestVersion(); version++) {
             OffsetsForLeaderEpochRequest request = builder.build((short) 3);

Review comment:
   Should we replace `3` by `version` here?
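
   A minimal, self-contained sketch of the pattern being suggested, with hypothetical names (`latestVersion`, `buildVersions`) standing in for Kafka's actual builder API: the loop variable, not a hard-coded constant, should be passed to the build call, and an inclusive bound keeps the latest version in range.

```java
import java.util.ArrayList;
import java.util.List;

public class VersionLoopSketch {
    // Collects the versions that would actually be built by the loop.
    static List<Short> buildVersions(short latestVersion) {
        List<Short> built = new ArrayList<>();
        // '<=' keeps latestVersion itself in range; 'version' (not a constant) is used.
        for (short version = 3; version <= latestVersion; version++) {
            built.add(version);
        }
        return built;
    }

    public static void main(String[] args) {
        System.out.println(buildVersions((short) 4)); // [3, 4]
    }
}
```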






[GitHub] [kafka] tombentley commented on pull request #9826: KAFKA-10816: Initialize REST endpoints only after the herder has started

2021-03-04 Thread GitBox


tombentley commented on pull request #9826:
URL: https://github.com/apache/kafka/pull/9826#issuecomment-790541247


   @rhauch @kkonstantine @C0urante PTAL







[GitHub] [kafka] chia7712 commented on a change in pull request #10078: MINOR: make sure all generated data tests cover all versions

2021-03-04 Thread GitBox


chia7712 commented on a change in pull request #10078:
URL: https://github.com/apache/kafka/pull/10078#discussion_r587443165



##
File path: clients/src/test/java/org/apache/kafka/common/requests/OffsetsForLeaderEpochRequestTest.java
##
@@ -34,15 +34,15 @@ public void testForConsumerRequiresVersion3() {
             assertThrows(UnsupportedVersionException.class, () -> builder.build(v));
         }

-        for (short version = 3; version < ApiKeys.OFFSET_FOR_LEADER_EPOCH.latestVersion(); version++) {
+        for (short version = 3; version <= ApiKeys.OFFSET_FOR_LEADER_EPOCH.latestVersion(); version++) {
             OffsetsForLeaderEpochRequest request = builder.build((short) 3);

Review comment:
   good catch. fixed.









[GitHub] [kafka] chia7712 commented on a change in pull request #10078: MINOR: make sure all generated data tests cover all versions

2021-03-04 Thread GitBox


chia7712 commented on a change in pull request #10078:
URL: https://github.com/apache/kafka/pull/10078#discussion_r587445495



##
File path: clients/src/test/java/org/apache/kafka/common/requests/ProduceRequestTest.java
##
@@ -291,19 +292,11 @@ public void testMixedIdempotentData() {
         assertTrue(RequestTestUtils.hasIdempotentRecords(request));
     }

-    private void assertThrowsInvalidRecordExceptionForAllVersions(ProduceRequest.Builder builder) {
-        for (short version = builder.oldestAllowedVersion(); version < builder.latestAllowedVersion(); version++) {
-            assertThrowsInvalidRecordException(builder, version);
-        }
-    }
-
-    private void assertThrowsInvalidRecordException(ProduceRequest.Builder builder, short version) {
-        try {
-            builder.build(version).serialize();
-            fail("Builder did not raise " + InvalidRecordException.class.getName() + " as expected");
-        } catch (RuntimeException e) {
-            assertTrue(InvalidRecordException.class.isAssignableFrom(e.getClass()),
-                "Unexpected exception type " + e.getClass().getName());
+    private static void assertThrowsInvalidRecordExceptionForAllVersions(ProduceRequest.Builder builder,
+                                                                         Class expectedType) {
+        for (short version = builder.oldestAllowedVersion(); version <= builder.latestAllowedVersion(); version++) {
+            short v = version;

Review comment:
   The lambda requires an effectively final variable. I rewrote the code with a lambda forEach to eliminate the duplicate intermediate variable.
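
   A minimal sketch of the Java rule at play here (not the actual Kafka test code): a lambda can only capture effectively final locals, so a mutating loop counter must either be copied first (`short v = version;`) or the loop rewritten as a stream, where each value arrives as a lambda parameter and no copy is needed.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class EffectivelyFinalSketch {
    // Stream form of a counting loop: 'version' is a lambda parameter,
    // which is effectively final, so it can be used directly in the body.
    static List<Integer> doubledVersions(int oldest, int latest) {
        return IntStream.rangeClosed(oldest, latest)   // inclusive of 'latest'
                .map(version -> version * 2)
                .boxed()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(doubledVersions(1, 3)); // [2, 4, 6]
    }
}
```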









[GitHub] [kafka] chia7712 commented on a change in pull request #10078: MINOR: make sure all generated data tests cover all versions

2021-03-04 Thread GitBox


chia7712 commented on a change in pull request #10078:
URL: https://github.com/apache/kafka/pull/10078#discussion_r587450954



##
File path: clients/src/test/java/org/apache/kafka/common/message/MessageTest.java
##
@@ -485,7 +485,7 @@ public void testOffsetCommitRequestVersions() throws Exception {
         if (version == 1) {
             testEquivalentMessageRoundTrip(version, requestData);
         } else if (version >= 2 && version <= 4) {
-            testAllMessageRoundTripsBetweenVersions(version, (short) 4, requestData, requestData);
+            testAllMessageRoundTripsBetweenVersions(version, (short) 5, requestData, requestData);

Review comment:
   `testAllMessageRoundTripsBetweenVersions` excludes the end version, so we should pass 5 in order to test version 4.
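
   The convention being described is a half-open range: a helper whose end bound is exclusive must be passed one past the last version to be covered. A hypothetical sketch of that convention (names are illustrative, not the actual MessageTest helper):

```java
import java.util.ArrayList;
import java.util.List;

public class ExclusiveEndSketch {
    // Mimics a round-trip helper whose 'endVersionExclusive' bound is excluded.
    static List<Short> versionsCovered(short startVersion, short endVersionExclusive) {
        List<Short> covered = new ArrayList<>();
        for (short v = startVersion; v < endVersionExclusive; v++) {
            covered.add(v);
        }
        return covered;
    }

    public static void main(String[] args) {
        // Passing 4 would stop at version 3; passing 5 covers version 4 too.
        System.out.println(versionsCovered((short) 2, (short) 5)); // [2, 3, 4]
    }
}
```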









[GitHub] [kafka] ijuma commented on pull request #9758: MINOR: remove FetchResponse.AbortedTransaction and redundant construc…

2021-03-04 Thread GitBox


ijuma commented on pull request #9758:
URL: https://github.com/apache/kafka/pull/9758#issuecomment-790614014


   Great to see this merged. @chia7712 Will you follow up with the items we 
split into separate JIRAs? I think the 3 key ones are:
   1. Remove unnecessary copying, if possible, in KafkaApis
   2. Remove the lazily populated map in FetchResponse
   3. I think you had an idea for removing `FetchResponse.toMessage`







[jira] [Created] (KAFKA-12410) Remove FetchResponse#of

2021-03-04 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-12410:
--

 Summary: Remove FetchResponse#of
 Key: KAFKA-12410
 URL: https://issues.apache.org/jira/browse/KAFKA-12410
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


That helper method introduces a couple of collections/groups, so it would be 
better to replace all usages with FetchResponseData.





[GitHub] [kafka] chia7712 commented on pull request #9758: MINOR: remove FetchResponse.AbortedTransaction and redundant construc…

2021-03-04 Thread GitBox


chia7712 commented on pull request #9758:
URL: https://github.com/apache/kafka/pull/9758#issuecomment-790617559


   > Remove unnecessary copying, if possible, in KafkaApis
   
   https://issues.apache.org/jira/browse/KAFKA-12385
   
   > Remove the lazily populated map in FetchResponse
   
   https://issues.apache.org/jira/browse/KAFKA-12387
   
   > I think you had an idea for removing FetchResponse.toMessage
   
   ugh, I missed this one. opened: 
https://issues.apache.org/jira/browse/KAFKA-12410







[GitHub] [kafka] ijuma commented on a change in pull request #10203: MINOR: Prepare for Gradle 7.0

2021-03-04 Thread GitBox


ijuma commented on a change in pull request #10203:
URL: https://github.com/apache/kafka/pull/10203#discussion_r587468307



##
File path: build.gradle
##
@@ -1833,27 +1848,30 @@ project(':jmh-benchmarks') {
   apply plugin: 'com.github.johnrengelman.shadow'
 
   shadowJar {
-baseName = 'kafka-jmh-benchmarks-all'
-classifier = null

Review comment:
   Done.









[jira] [Created] (KAFKA-12411) Group Coordinator Followers failing with OffsetsOutOfOrderException

2021-03-04 Thread Sandeep (Jira)
Sandeep created KAFKA-12411:
---

 Summary: Group Coordinator Followers failing with OffsetsOutOfOrderException
 Key: KAFKA-12411
 URL: https://issues.apache.org/jira/browse/KAFKA-12411
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Sandeep
 Attachments: replica_logs

Post group coordinator failure and new leader election, the followers are 
failing with OffsetsOutOfOrderException.

Clearing the follower log directory and restarting did not help.

Broker Version: 2.6.0

Zookeeper: 3.0.7

PFA for follower logs





[GitHub] [kafka] ijuma commented on a change in pull request #10203: MINOR: Prepare for Gradle 7.0

2021-03-04 Thread GitBox


ijuma commented on a change in pull request #10203:
URL: https://github.com/apache/kafka/pull/10203#discussion_r587471462



##
File path: build.gradle
##
@@ -1987,28 +2005,30 @@ project(':connect:json') {
   archivesBaseName = "connect-json"
 
   dependencies {
-compile project(':connect:api')
-compile libs.jacksonDatabind
-compile libs.jacksonJDK8Datatypes
-compile libs.slf4jApi
+implementation project(':connect:api')

Review comment:
   To clarify, `compile` is more like `api` than `implementation`. I switched the dependency of `connect-api` for `connect-json` and `connect-transform` to `api` for now. We can change this in a future PR, if we like, but it seems more consistent with the generally conservative approach we're taking in this PR.









[jira] [Created] (KAFKA-12412) Group Coordinator followers are failing with OffsetsOutOfOrderException

2021-03-04 Thread Sandeep (Jira)
Sandeep created KAFKA-12412:
---

 Summary: Group Coordinator followers are failing with 
OffsetsOutOfOrderException
 Key: KAFKA-12412
 URL: https://issues.apache.org/jira/browse/KAFKA-12412
 Project: Kafka
  Issue Type: Bug
Reporter: Sandeep
 Attachments: replica_logs

Upon failure of the group coordinator, the followers of the newly elected group 
coordinator are failing with OffsetsOutOfOrderException.

Kafka Broker Version: 2.6.0

Zookeeper version: 3.0.7

consumer API: 1.6.0

producer: librdkafka: 0.9.1

PFA: follower logs





[jira] [Created] (KAFKA-12413) Group Coordinator followers are failing with OffsetsOutOfOrderException

2021-03-04 Thread Sandeep (Jira)
Sandeep created KAFKA-12413:
---

 Summary: Group Coordinator followers are failing with 
OffsetsOutOfOrderException
 Key: KAFKA-12413
 URL: https://issues.apache.org/jira/browse/KAFKA-12413
 Project: Kafka
  Issue Type: Bug
Reporter: Sandeep
 Attachments: replica_logs

Upon failure of the group coordinator, the followers of the newly elected group 
coordinator are failing with OffsetsOutOfOrderException.

Kafka Broker Version: 2.6.0
Zookeeper version: 3.0.7
consumer API: 1.6.0
producer: librdkafka: 0.9.1
PFA: follower logs





[jira] [Created] (KAFKA-12415) Prepare for Gradle 7.0 and restrict transitive scope for non api dependencies

2021-03-04 Thread Ismael Juma (Jira)
Ismael Juma created KAFKA-12415:
---

 Summary: Prepare for Gradle 7.0 and restrict transitive scope for 
non api dependencies
 Key: KAFKA-12415
 URL: https://issues.apache.org/jira/browse/KAFKA-12415
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 3.0.0


Also required:
 * Migrate from maven plugin to maven-publish plugin
 * Migrate from java to java-library plugin





[jira] [Created] (KAFKA-12414) Group Coordinator followers are failing with OffsetsOutOfOrderException

2021-03-04 Thread Sandeep (Jira)
Sandeep created KAFKA-12414:
---

 Summary: Group Coordinator followers are failing with 
OffsetsOutOfOrderException
 Key: KAFKA-12414
 URL: https://issues.apache.org/jira/browse/KAFKA-12414
 Project: Kafka
  Issue Type: Bug
  Components: replication
Affects Versions: 2.6.0
Reporter: Sandeep
 Attachments: replica_logs

Upon failure of the group coordinator, the followers of the newly elected group 
coordinator are failing with OffsetsOutOfOrderException.

Kafka Broker Version: 2.6.0
Zookeeper version: 3.0.7
consumer API: 1.6.0
producer: librdkafka: 0.9.1
PFA: follower logs





[jira] [Created] (KAFKA-12416) Group Coordinator followers are failing with OffsetsOutOfOrderException

2021-03-04 Thread Sandeep (Jira)
Sandeep created KAFKA-12416:
---

 Summary: Group Coordinator followers are failing with 
OffsetsOutOfOrderException
 Key: KAFKA-12416
 URL: https://issues.apache.org/jira/browse/KAFKA-12416
 Project: Kafka
  Issue Type: Bug
  Components: replication
Reporter: Sandeep
 Attachments: replica_logs

Upon failure of the group coordinator, the followers of the newly elected group 
coordinator are failing with OffsetsOutOfOrderException.

Kafka Broker Version: 2.6.0
Zookeeper version: 3.0.7
consumer API: 1.6.0
producer: librdkafka: 0.9.1
PFA: [^replica_logs]





[GitHub] [kafka] jeqo commented on a change in pull request #10042: KAFKA-9527: fix NPE when using time-based argument for Stream Resetter

2021-03-04 Thread GitBox


jeqo commented on a change in pull request #10042:
URL: https://github.com/apache/kafka/pull/10042#discussion_r587485775



##
File path: core/src/main/scala/kafka/tools/StreamsResetter.java
##
@@ -503,7 +506,16 @@ private void resetToDatetime(final Consumer client,
         final Map topicPartitionsAndOffset = client.offsetsForTimes(topicPartitionsAndTimes);

         for (final TopicPartition topicPartition : inputTopicPartitions) {
-            client.seek(topicPartition, topicPartitionsAndOffset.get(topicPartition).offset());
+            final Optional partitionOffset = Optional.ofNullable(topicPartitionsAndOffset.get(topicPartition))
+                .map(OffsetAndTimestamp::offset)
+                .filter(offset -> offset != ListOffsetsResponse.UNKNOWN_OFFSET);
+            if (partitionOffset.isPresent()) {
+                client.seek(topicPartition, partitionOffset.get());
+            } else {
+                client.seekToEnd(Collections.singletonList(topicPartition));
+                System.out.println("Partition " + topicPartition.partition() + " from topic " + topicPartition.topic() +
+                    " is empty, without a committed record. Falling back to latest known offset.");

Review comment:
   "Empty" should be clear enough. Will update it in #10092.
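
   The NPE fix in the diff above guards a map lookup that can return null. The same null-safe pattern in a self-contained sketch, with simplified types as stand-ins (the real code uses TopicPartition and OffsetAndTimestamp):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class OffsetFallbackSketch {
    static final long UNKNOWN_OFFSET = -1L;  // stand-in for ListOffsetsResponse.UNKNOWN_OFFSET

    // Wraps a possibly-null map lookup in Optional and filters out the
    // unknown-offset sentinel, so callers can fall back (e.g. seek to end).
    static Optional<Long> resolvedOffset(Map<String, Long> offsets, String partition) {
        return Optional.ofNullable(offsets.get(partition))
                .filter(offset -> offset != UNKNOWN_OFFSET);
    }

    public static void main(String[] args) {
        Map<String, Long> offsets = new HashMap<>();
        offsets.put("topic-0", 42L);
        offsets.put("topic-1", UNKNOWN_OFFSET); // no record at/after the timestamp
        System.out.println(resolvedOffset(offsets, "topic-0")); // Optional[42]
        System.out.println(resolvedOffset(offsets, "topic-1")); // Optional.empty
        System.out.println(resolvedOffset(offsets, "topic-2")); // Optional.empty (absent key)
    }
}
```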










[GitHub] [kafka] ijuma commented on a change in pull request #10203: KAFKA-12415 Prepare for Gradle 7.0 and restrict transitive scope for non api dependencies

2021-03-04 Thread GitBox


ijuma commented on a change in pull request #10203:
URL: https://github.com/apache/kafka/pull/10203#discussion_r587485903



##
File path: build.gradle
##
@@ -1468,7 +1461,7 @@ project(':streams') {
   include('log4j*jar')
   include('*hamcrest*')
 }
-from (configurations.runtime) {
+from (configurations.runtimeClasspath) {

Review comment:
   https://issues.apache.org/jira/browse/KAFKA-12417









[jira] [Created] (KAFKA-12417) streams module should use `testRuntimeClasspath` instead of `testRuntime` configuration

2021-03-04 Thread Ismael Juma (Jira)
Ismael Juma created KAFKA-12417:
---

 Summary: streams module should use `testRuntimeClasspath` instead 
of `testRuntime` configuration
 Key: KAFKA-12417
 URL: https://issues.apache.org/jira/browse/KAFKA-12417
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma


The streams module has the following code:
{code:java}
tasks.create(name: "copyDependantLibs", type: Copy) {
  from (configurations.testRuntime) {
include('slf4j-log4j12*')
include('log4j*jar')
include('*hamcrest*')
  }
  from (configurations.runtimeClasspath) {
exclude('kafka-clients*')
  }
  into "$buildDir/dependant-libs-${versions.scala}"
  duplicatesStrategy 'exclude'
} {code}
{{configurations.testRuntime}} should be changed to 
{{configurations.testRuntimeClasspath}} as the former has been removed in 
Gradle 7.0, but this causes a cyclic build error.





[jira] [Created] (KAFKA-12418) Make sure it's ok not to include test jars in the release tarball

2021-03-04 Thread Ismael Juma (Jira)
Ismael Juma created KAFKA-12418:
---

 Summary: Make sure it's ok not to include test jars in the release 
tarball
 Key: KAFKA-12418
 URL: https://issues.apache.org/jira/browse/KAFKA-12418
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 3.0.0


As of [https://github.com/apache/kafka/pull/10203], the release tarball no 
longer includes test, sources, javadoc, and test sources jars. These are still 
published to the Maven Central repository.

This seems like a good change, and 3.0.0 would be a good time to do it, but 
I'm filing this JIRA to follow up and make sure before said release.





[GitHub] [kafka] ijuma commented on pull request #10203: KAFKA-12415 Prepare for Gradle 7.0 and restrict transitive scope for non api dependencies

2021-03-04 Thread GitBox


ijuma commented on pull request #10203:
URL: https://github.com/apache/kafka/pull/10203#issuecomment-790636038


   @chia7712 I filed https://issues.apache.org/jira/browse/KAFKA-12418 to 
follow up on the jars that are no longer included in the release tarball. It 
seems OK to me, but I'll do a bit more due diligence before the 3.0 release.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] ijuma edited a comment on pull request #10203: KAFKA-12415 Prepare for Gradle 7.0 and restrict transitive scope for non api dependencies

2021-03-04 Thread GitBox


ijuma edited a comment on pull request #10203:
URL: https://github.com/apache/kafka/pull/10203#issuecomment-790636038


   @chia7712 I filed https://issues.apache.org/jira/browse/KAFKA-12418 to 
follow up on the jars that are no longer included in the release tarball. It 
seems OK to me, but I'll do a bit more due diligence before the 3.0.0 release.







[GitHub] [kafka] lindong28 commented on pull request #10257: KAFKA-12407: Document omitted Controller Health Metrics

2021-03-04 Thread GitBox


lindong28 commented on pull request #10257:
URL: https://github.com/apache/kafka/pull/10257#issuecomment-790640953


   Thank you @dongjinleekr for the improvement! LGTM.







[GitHub] [kafka] ijuma commented on a change in pull request #10131: KAFKA-5146 / move kafka-streams example to a new module

2021-03-04 Thread GitBox


ijuma commented on a change in pull request #10131:
URL: https://github.com/apache/kafka/pull/10131#discussion_r587501265



##
File path: build.gradle
##
@@ -974,8 +974,8 @@ project(':core') {
 from(project(':streams:streams-scala').configurations.runtime) { 
into("libs/") }
 from(project(':streams:test-utils').jar) { into("libs/") }
 from(project(':streams:test-utils').configurations.runtime) { 
into("libs/") }
-from(project(':streams:examples').jar) { into("libs/") }
-from(project(':streams:examples').configurations.runtime) { into("libs/") }
+from(project(':streams-examples').jar) { into("libs/") }
+from(project(':streams-examples').configurations.runtime) { into("libs/") }

Review comment:
   @ableegoldman No. I removed the `connect-json` dependency from `streams` 
and left it in `streams:examples` and there is no longer a `streams` dependency:
   
   > compileClasspath - Compile classpath for source set 'main'.
   > +--- project :clients
   > +--- org.slf4j:slf4j-api:1.7.30
   > +--- org.rocksdb:rocksdbjni:5.18.4
   > +--- com.fasterxml.jackson.core:jackson-annotations:2.10.5
   > \--- com.fasterxml.jackson.core:jackson-databind:2.10.5.1
   >  +--- com.fasterxml.jackson.core:jackson-annotations:2.10.5
   >  \--- com.fasterxml.jackson.core:jackson-core:2.10.5
   > 









[GitHub] [kafka] ijuma commented on a change in pull request #10131: KAFKA-5146 / move kafka-streams example to a new module

2021-03-04 Thread GitBox


ijuma commented on a change in pull request #10131:
URL: https://github.com/apache/kafka/pull/10131#discussion_r587501265



##
File path: build.gradle
##
@@ -974,8 +974,8 @@ project(':core') {
 from(project(':streams:streams-scala').configurations.runtime) { 
into("libs/") }
 from(project(':streams:test-utils').jar) { into("libs/") }
 from(project(':streams:test-utils').configurations.runtime) { 
into("libs/") }
-from(project(':streams:examples').jar) { into("libs/") }
-from(project(':streams:examples').configurations.runtime) { into("libs/") }
+from(project(':streams-examples').jar) { into("libs/") }
+from(project(':streams-examples').configurations.runtime) { into("libs/") }

Review comment:
   @ableegoldman No. I removed the `connect-json` dependency from `streams` 
and left it in `streams:examples` and the dependency is gone from the `streams` 
module:
   
   > compileClasspath - Compile classpath for source set 'main'.
   > +--- project :clients
   > +--- org.slf4j:slf4j-api:1.7.30
   > +--- org.rocksdb:rocksdbjni:5.18.4
   > +--- com.fasterxml.jackson.core:jackson-annotations:2.10.5
   > \--- com.fasterxml.jackson.core:jackson-databind:2.10.5.1
   >  +--- com.fasterxml.jackson.core:jackson-annotations:2.10.5
   >  \--- com.fasterxml.jackson.core:jackson-core:2.10.5
   > 









[GitHub] [kafka] ijuma commented on a change in pull request #10131: KAFKA-5146 / move kafka-streams example to a new module

2021-03-04 Thread GitBox


ijuma commented on a change in pull request #10131:
URL: https://github.com/apache/kafka/pull/10131#discussion_r587502355



##
File path: build.gradle
##
@@ -974,8 +974,8 @@ project(':core') {
 from(project(':streams:streams-scala').configurations.runtime) { 
into("libs/") }
 from(project(':streams:test-utils').jar) { into("libs/") }
 from(project(':streams:test-utils').configurations.runtime) { 
into("libs/") }
-from(project(':streams:examples').jar) { into("libs/") }
-from(project(':streams:examples').configurations.runtime) { into("libs/") }
+from(project(':streams-examples').jar) { into("libs/") }
+from(project(':streams-examples').configurations.runtime) { into("libs/") }

Review comment:
   I think that's all you need to do.









[GitHub] [kafka] MarcoLotz commented on a change in pull request #10131: KAFKA-5146 / move kafka-streams example to a new module

2021-03-04 Thread GitBox


MarcoLotz commented on a change in pull request #10131:
URL: https://github.com/apache/kafka/pull/10131#discussion_r587511329



##
File path: build.gradle
##
@@ -974,8 +974,8 @@ project(':core') {
 from(project(':streams:streams-scala').configurations.runtime) { 
into("libs/") }
 from(project(':streams:test-utils').jar) { into("libs/") }
 from(project(':streams:test-utils').configurations.runtime) { 
into("libs/") }
-from(project(':streams:examples').jar) { into("libs/") }
-from(project(':streams:examples').configurations.runtime) { into("libs/") }
+from(project(':streams-examples').jar) { into("libs/") }
+from(project(':streams-examples').configurations.runtime) { into("libs/") }

Review comment:
   Sounds good. I will send a PR later this week fixing it as discussed 
here. Probably even tonight (CET).









[GitHub] [kafka] lindong28 merged pull request #10257: KAFKA-12407: Document Controller Health Metrics

2021-03-04 Thread GitBox


lindong28 merged pull request #10257:
URL: https://github.com/apache/kafka/pull/10257


   







[jira] [Resolved] (KAFKA-12407) Document omitted Controller Health Metrics

2021-03-04 Thread Dong Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved KAFKA-12407.
--
Resolution: Fixed

> Document omitted Controller Health Metrics
> --
>
> Key: KAFKA-12407
> URL: https://issues.apache.org/jira/browse/KAFKA-12407
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Major
>
> [KIP-237|https://cwiki.apache.org/confluence/display/KAFKA/KIP-237%3A+More+Controller+Health+Metrics]
>  introduced 3 controller health metrics like the following, but none of them 
> are documented.
>  * kafka.controller:type=ControllerEventManager,name=EventQueueSize
>  * kafka.controller:type=ControllerEventManager,name=EventQueueTimeMs
>  * 
> kafka.controller:type=ControllerChannelManager,name=RequestRateAndQueueTimeMs,brokerId=\{broker-id}
>  





[jira] [Commented] (KAFKA-12393) Document multi-tenancy considerations

2021-03-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17295308#comment-17295308
 ] 

ASF GitHub Bot commented on KAFKA-12393:


bbejeck merged pull request #334:
URL: https://github.com/apache/kafka-site/pull/334


   





> Document multi-tenancy considerations
> -
>
> Key: KAFKA-12393
> URL: https://issues.apache.org/jira/browse/KAFKA-12393
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Michael G. Noll
>Assignee: Michael G. Noll
>Priority: Minor
>
> We should provide an overview of multi-tenancy considerations (e.g., user 
> spaces, security) as the current documentation lacks such information.





[GitHub] [kafka] tombentley opened a new pull request #10262: MINOR: Add missing log argument

2021-03-04 Thread GitBox


tombentley opened a new pull request #10262:
URL: https://github.com/apache/kafka/pull/10262


   







[GitHub] [kafka] tombentley commented on pull request #10262: MINOR: Add missing log argument

2021-03-04 Thread GitBox


tombentley commented on pull request #10262:
URL: https://github.com/apache/kafka/pull/10262#issuecomment-790667419


   @hachikuji @cmccabe @mumrah this fixes a very trivial problem with a log.trace, 
please could one of you review?







[GitHub] [kafka] tombentley commented on a change in pull request #10253: KAFKA-12376: Apply atomic append to the log

2021-03-04 Thread GitBox


tombentley commented on a change in pull request #10253:
URL: https://github.com/apache/kafka/pull/10253#discussion_r587533515



##
File path: 
metadata/src/test/java/org/apache/kafka/controller/FeatureControlManagerTest.java
##
@@ -101,32 +101,55 @@ public void testUpdateFeaturesErrorCases() {
 SnapshotRegistry snapshotRegistry = new SnapshotRegistry(new 
LogContext());
 FeatureControlManager manager = new FeatureControlManager(
 rangeMap("foo", 1, 5, "bar", 1, 2), snapshotRegistry);
-assertEquals(new ControllerResult<>(Collections.
-singletonMap("foo", new ApiError(Errors.INVALID_UPDATE_VERSION,
-"Broker 5 does not support the given feature range."))),
-manager.updateFeatures(rangeMap("foo", 1, 3),
+
+assertEquals(
+ControllerResult.of(
+Collections.emptyList(),
+Collections.singletonMap(
+"foo",
+new ApiError(
+Errors.INVALID_UPDATE_VERSION,
+"Broker 5 does not support the given feature range."
+)
+)
+),
+manager.updateFeatures(
+rangeMap("foo", 1, 3),
 new HashSet<>(Arrays.asList("foo")),

Review comment:
   `Collections.singleton()`?
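
   The suggestion refers to replacing `new HashSet<>(Arrays.asList("foo"))` with `Collections.singleton("foo")`. A minimal sketch of the equivalence (note that `Collections.singleton` returns an immutable set, which is usually acceptable for a test fixture):

   ```java
   import java.util.Arrays;
   import java.util.Collections;
   import java.util.HashSet;
   import java.util.Set;

   public class SingletonSketch {
       public static void main(String[] args) {
           // Verbose form used in the test under review: a mutable HashSet.
           Set<String> viaHashSet = new HashSet<>(Arrays.asList("foo"));
           // Suggested shorthand: an immutable single-element set.
           Set<String> viaSingleton = Collections.singleton("foo");

           // Both sets compare equal, so assertions against them are unaffected.
           System.out.println(viaHashSet.equals(viaSingleton)); // prints "true"
       }
   }
   ```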









[GitHub] [kafka] tombentley commented on a change in pull request #10253: KAFKA-12376: Apply atomic append to the log

2021-03-04 Thread GitBox


tombentley commented on a change in pull request #10253:
URL: https://github.com/apache/kafka/pull/10253#discussion_r587534836



##
File path: metadata/src/main/java/org/apache/kafka/metalog/MetaLogManager.java
##
@@ -50,13 +50,30 @@
  * offset before renouncing its leadership.  The listener should determine 
this by
  * monitoring the committed offsets.
  *
- * @param epoch The controller epoch.
- * @param batch The batch of messages to write.
+ * @param epoch the controller epoch
+ * @param batch the batch of messages to write

Review comment:
   The other methods in the class consistently end `@param` with a period. 









[GitHub] [kafka] miguno opened a new pull request #10263: KAFKA-12393: Document multi-tenancy considerations (#334)

2021-03-04 Thread GitBox


miguno opened a new pull request #10263:
URL: https://github.com/apache/kafka/pull/10263


   * KAFKA-12393: Document multi-tenancy considerations
   * Addressed review feedback by @dajac and @rajinisivaram
   
   Ported from https://github.com/apache/kafka-site/pull/334
   







[GitHub] [kafka] miguno commented on pull request #10263: KAFKA-12393: Document multi-tenancy considerations

2021-03-04 Thread GitBox


miguno commented on pull request #10263:
URL: https://github.com/apache/kafka/pull/10263#issuecomment-790697832


   cc @bbejeck 







[GitHub] [kafka] highluck commented on a change in pull request #9851: KAFKA-10769 Remove JoinGroupRequest#containsValidPattern as it is dup…

2021-03-04 Thread GitBox


highluck commented on a change in pull request #9851:
URL: https://github.com/apache/kafka/pull/9851#discussion_r587578540



##
File path: 
clients/src/main/java/org/apache/kafka/common/requests/JoinGroupRequest.java
##
@@ -59,38 +59,12 @@ public String toString() {
 public static final int UNKNOWN_GENERATION_ID = -1;
 public static final String UNKNOWN_PROTOCOL_NAME = "";
 
-private static final int MAX_GROUP_INSTANCE_ID_LENGTH = 249;
-
 /**
  * Ported from class Topic in {@link org.apache.kafka.common.internals} to 
restrict the charset for
  * static member id.
  */
 public static void validateGroupInstanceId(String id) {
-if (id.equals(""))
-throw new InvalidConfigurationException("Group instance id must be 
non-empty string");

Review comment:
   ping @chia7712 
   thanks









[GitHub] [kafka] bbejeck commented on pull request #10263: KAFKA-12393: Document multi-tenancy considerations

2021-03-04 Thread GitBox


bbejeck commented on pull request #10263:
URL: https://github.com/apache/kafka/pull/10263#issuecomment-790711383


   This is a duplication of https://github.com/apache/kafka-site/pull/334 
already merged to kafka-site







[GitHub] [kafka] bbejeck edited a comment on pull request #10263: KAFKA-12393: Document multi-tenancy considerations

2021-03-04 Thread GitBox


bbejeck edited a comment on pull request #10263:
URL: https://github.com/apache/kafka/pull/10263#issuecomment-790711383


   ~This is a port of https://github.com/apache/kafka-site/pull/334 already 
merged to kafka-site~
   
   I didn't see @miguno's comment above referencing the kafka-site PR before.







[GitHub] [kafka] bbejeck merged pull request #10263: KAFKA-12393: Document multi-tenancy considerations

2021-03-04 Thread GitBox


bbejeck merged pull request #10263:
URL: https://github.com/apache/kafka/pull/10263


   







[GitHub] [kafka] bbejeck commented on pull request #10263: KAFKA-12393: Document multi-tenancy considerations

2021-03-04 Thread GitBox


bbejeck commented on pull request #10263:
URL: https://github.com/apache/kafka/pull/10263#issuecomment-790715340


   Merged #10263 into trunk







[GitHub] [kafka] bbejeck commented on pull request #10263: KAFKA-12393: Document multi-tenancy considerations

2021-03-04 Thread GitBox


bbejeck commented on pull request #10263:
URL: https://github.com/apache/kafka/pull/10263#issuecomment-790717480


   cherry-picked to 2.8







[jira] [Updated] (KAFKA-12393) Document multi-tenancy considerations

2021-03-04 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-12393:

Fix Version/s: 2.8.0
   3.0.0

> Document multi-tenancy considerations
> -
>
> Key: KAFKA-12393
> URL: https://issues.apache.org/jira/browse/KAFKA-12393
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Michael G. Noll
>Assignee: Michael G. Noll
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
>
> We should provide an overview of multi-tenancy considerations (e.g., user 
> spaces, security) as the current documentation lacks such information.





[jira] [Resolved] (KAFKA-12393) Document multi-tenancy considerations

2021-03-04 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-12393.
-
Resolution: Fixed

Merged to trunk and cherry-picked to 2.8

> Document multi-tenancy considerations
> -
>
> Key: KAFKA-12393
> URL: https://issues.apache.org/jira/browse/KAFKA-12393
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Michael G. Noll
>Assignee: Michael G. Noll
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
>
> We should provide an overview of multi-tenancy considerations (e.g., user 
> spaces, security) as the current documentation lacks such information.





[GitHub] [kafka] mumrah commented on a change in pull request #10253: KAFKA-12376: Apply atomic append to the log

2021-03-04 Thread GitBox


mumrah commented on a change in pull request #10253:
URL: https://github.com/apache/kafka/pull/10253#discussion_r587595181



##
File path: metadata/src/main/java/org/apache/kafka/metalog/LocalLogManager.java
##
@@ -328,8 +328,21 @@ public void register(MetaLogListener listener) throws 
Exception {
 
 @Override
 public long scheduleWrite(long epoch, List batch) {
-return shared.tryAppend(nodeId, leader.epoch(), new LocalRecordBatch(
-batch.stream().map(r -> 
r.message()).collect(Collectors.toList())));
+return scheduleAtomicWrite(epoch, batch);
+}
+
+@Override
+public long scheduleAtomicWrite(long epoch, List 
batch) {
+return shared.tryAppend(

Review comment:
   Yea, sounds good. I actually agree it's more readable now









[GitHub] [kafka] mumrah commented on a change in pull request #10253: KAFKA-12376: Apply atomic append to the log

2021-03-04 Thread GitBox


mumrah commented on a change in pull request #10253:
URL: https://github.com/apache/kafka/pull/10253#discussion_r587598524



##
File path: 
metadata/src/main/java/org/apache/kafka/controller/FeatureControlManager.java
##
@@ -69,7 +69,12 @@
 results.put(entry.getKey(), updateFeature(entry.getKey(), 
entry.getValue(),
 downgradeables.contains(entry.getKey()), brokerFeatures, 
records));
 }
-return new ControllerResult<>(records, results);
+
+if (records.isEmpty()) {

Review comment:
   Any reason why we are calling the non-atomic method for an empty record 
set? Seems like we don't bother doing this in the other control manager 
classes. Can we just do `ControllerResult.atomicOf(records, results)` 
regardless of empty records?
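
   For readers following along, the distinction under discussion can be sketched with a stand-in. This is not the actual `org.apache.kafka.controller.ControllerResult` class; the `isAtomic` flag and the two factory methods here are assumptions made only to illustrate why `atomicOf` with an empty record list is harmless:

   ```java
   import java.util.Collections;
   import java.util.List;

   public class ControllerResultSketch {
       // Stand-in mirroring the shape discussed in the review: a result carries
       // the records to append plus a flag saying whether the append must be atomic.
       static final class ControllerResult<T> {
           final List<String> records;
           final T response;
           final boolean isAtomic;

           private ControllerResult(List<String> records, T response, boolean isAtomic) {
               this.records = records;
               this.response = response;
               this.isAtomic = isAtomic;
           }

           static <T> ControllerResult<T> of(List<String> records, T response) {
               return new ControllerResult<>(records, response, false);
           }

           static <T> ControllerResult<T> atomicOf(List<String> records, T response) {
               return new ControllerResult<>(records, response, true);
           }
       }

       public static void main(String[] args) {
           // The reviewer's point: with no records to append, an "atomic" append
           // is vacuously satisfied, so atomicOf() could be used unconditionally.
           ControllerResult<String> empty =
               ControllerResult.atomicOf(Collections.emptyList(), "no-op");
           System.out.println(empty.records.isEmpty() && empty.isAtomic); // prints "true"
       }
   }
   ```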









[GitHub] [kafka] chia7712 merged pull request #10078: MINOR: make sure all generated data tests cover all versions

2021-03-04 Thread GitBox


chia7712 merged pull request #10078:
URL: https://github.com/apache/kafka/pull/10078


   







[GitHub] [kafka] C0urante commented on pull request #9826: KAFKA-10816: Initialize REST endpoints only after the herder has started

2021-03-04 Thread GitBox


C0urante commented on pull request #9826:
URL: https://github.com/apache/kafka/pull/9826#issuecomment-790751172


   Thanks for taking this on, Tom!
   
   This seems fairly involved and I'm wondering if there's a simpler solution. 
The changes here cause `DistributedHerder::start` to block on the completion of 
`DistributedHerder::startServices`, which is invoked on a separate thread. If 
we want to retain this blocking behavior, could we move the invocation of 
`DistributedHerder::startServices` into the `DistributedHerder::start` method? 
I'm thinking something like this:
   
   ```java
   public class DistributedHerder extends AbstractHerder {
 @Override
 public void start() {
   log.info("Herder starting");
   startServices();
   log.info("Herder started");
   
   this.herderExecutor.submit(this);
 }
   }
   ```







[GitHub] [kafka] C0urante edited a comment on pull request #9826: KAFKA-10816: Initialize REST endpoints only after the herder has started

2021-03-04 Thread GitBox


C0urante edited a comment on pull request #9826:
URL: https://github.com/apache/kafka/pull/9826#issuecomment-790751172


   Thanks for taking this on, Tom!
   
   This seems fairly involved and I'm wondering if there's a simpler solution. 
The changes here cause `DistributedHerder::start` to block on the completion of 
`DistributedHerder::startServices`, which is invoked on a separate thread. If 
we want to retain this blocking behavior, could we move the invocation of 
`DistributedHerder::startServices` into the `DistributedHerder::start` method? 
I'm thinking something like this:
   
   ```java
   public class DistributedHerder extends AbstractHerder {
 @Override
 public void start() {
   log.info("Herder starting");
   startServices();
   log.info("Herder started");
   
   this.herderExecutor.submit(this);
 }
   }
   ```
   
   Also, I haven't looked at standalone mode yet, but I'm hoping that any 
solution we arrive at can work for both it and distributed mode in order to 
avoid gotchas due to unexpected differences in behavior between the two.







[GitHub] [kafka] jsancio commented on a change in pull request #10253: KAFKA-12376: Apply atomic append to the log

2021-03-04 Thread GitBox


jsancio commented on a change in pull request #10253:
URL: https://github.com/apache/kafka/pull/10253#discussion_r587635154



##
File path: metadata/src/main/java/org/apache/kafka/metalog/MetaLogManager.java
##
@@ -50,13 +50,30 @@
  * offset before renouncing its leadership.  The listener should determine 
this by
  * monitoring the committed offsets.
  *
- * @param epoch The controller epoch.
- * @param batch The batch of messages to write.
+ * @param epoch the controller epoch
+ * @param batch the batch of messages to write

Review comment:
   These are phrases, hence no period. For example, none of the javadoc in 
the Java standard library uses periods for these.
   
   I only modified the javadoc that I needed to update to not distract from the 
PR. We should do another pass and fix all of the comments.









[jira] [Commented] (KAFKA-12391) Add an option to store arbitrary metadata to a SourceRecord

2021-03-04 Thread Chris Egerton (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17295401#comment-17295401
 ] 

Chris Egerton commented on KAFKA-12391:
---

[~lb] I believe this is already possible by a few methods:
 # Add headers to the record (for example, via [this 
constructor|https://kafka.apache.org/27/javadoc/org/apache/kafka/connect/source/SourceRecord.html#SourceRecord-java.util.Map-java.util.Map-java.lang.String-java.lang.Integer-org.apache.kafka.connect.data.Schema-java.lang.Object-org.apache.kafka.connect.data.Schema-java.lang.Object-java.lang.Long-java.lang.Iterable-]).
 These will appear in the message stored in Kafka as headers, but won't be part 
of the message's key or value. If there's a chance users might benefit from 
this housekeeping metadata, this approach is probably the best.
 # Create a subclass of {{SourceRecord}} and cast to that subclass in your 
{{SourceTask::commitRecord}} method. This subclass could include your custom 
{{attributes}} method, for example. This approach would be best if there is 
absolutely no benefit to downstream users of being able to read this extra 
metadata.
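
The second approach can be sketched in a few lines. This is a minimal, self-contained illustration of the subclass-and-cast pattern; the `SourceRecord` base class here is a stripped-down stand-in for `org.apache.kafka.connect.source.SourceRecord`, and the `housekeepingHandle` helper and the `"handle"` key are made up for the example, playing the role of the connector's `commitRecord` hook:

```java
import java.util.Collections;
import java.util.Map;

public class AttributeSketch {
    // Stripped-down stand-in for org.apache.kafka.connect.source.SourceRecord;
    // only the fields needed to show the pattern.
    static class SourceRecord {
        final String topic;
        final Object value;

        SourceRecord(String topic, Object value) {
            this.topic = topic;
            this.value = value;
        }
    }

    // Connector-private subclass: the attributes live only inside the
    // connector process and never become part of the message sent to Kafka.
    static class AttributedSourceRecord extends SourceRecord {
        private final Map<String, Object> attributes;

        AttributedSourceRecord(String topic, Object value, Map<String, Object> attributes) {
            super(topic, value);
            this.attributes = attributes;
        }

        Map<String, Object> attributes() {
            return attributes;
        }
    }

    // Plays the role of SourceTask::commitRecord: downcast to recover the
    // housekeeping metadata once the broker has acknowledged the record.
    static Object housekeepingHandle(SourceRecord record) {
        if (record instanceof AttributedSourceRecord) {
            return ((AttributedSourceRecord) record).attributes().get("handle");
        }
        return null;
    }

    public static void main(String[] args) {
        SourceRecord record = new AttributedSourceRecord("my-topic", "my-value",
            Collections.singletonMap("handle", "file-42"));
        System.out.println(housekeepingHandle(record)); // prints "file-42"
    }
}
```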

> Add an option to store arbitrary metadata to a SourceRecord
> ---
>
> Key: KAFKA-12391
> URL: https://issues.apache.org/jira/browse/KAFKA-12391
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Luca Burgazzoli
>Priority: Minor
>
> When writing Source Connectors for Kafka, it may be necessary to perform some 
> additional housekeeping when a record has been acknowledged by the Kafka 
> broker, and as of today it is possible to set up a hook by overriding 
> SourceTask.commitRecord(SourceRecord).
> This works fine in most cases, but to make it easy for the source 
> connector to perform its internal housekeeping, it would be nice to have an 
> option to attach some additional metadata to the SourceRecord without 
> impacting the record sent to the Kafka broker, something like:
> {code:java}
> class SourceRecord {
> public SourceRecord(
> ...,
> Map attributes) {
> ...
> this.attributes = attributes;
> }
> Map attributes() { 
> return attributes;
> }
> }
> {code}





[jira] [Comment Edited] (KAFKA-12391) Add an option to store arbitrary metadata to a SourceRecord

2021-03-04 Thread Chris Egerton (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17295401#comment-17295401
 ] 

Chris Egerton edited comment on KAFKA-12391 at 3/4/21, 4:50 PM:


[~lb] I believe this is already possible with a few approaches:
 # Add headers to the record (for example, via [this 
constructor|https://kafka.apache.org/27/javadoc/org/apache/kafka/connect/source/SourceRecord.html#SourceRecord-java.util.Map-java.util.Map-java.lang.String-java.lang.Integer-org.apache.kafka.connect.data.Schema-java.lang.Object-org.apache.kafka.connect.data.Schema-java.lang.Object-java.lang.Long-java.lang.Iterable-]).
 These will appear in the message stored in Kafka as headers, but won't be part 
of the message's key or value. If there's a chance users might benefit from 
this housekeeping metadata, this approach is probably the best.
 # Create a subclass of {{SourceRecord}} and cast to that subclass in your 
{{SourceTask::commitRecord}} method. This subclass could include your custom 
{{attributes}} method, for example. This approach would be best if there is 
absolutely no benefit to downstream users of being able to read this extra 
metadata.


was (Author: chrisegerton):
[~lb] I believe this is already possible by a few methods:
 # Add headers to the record (for example, via [this 
constructor|https://kafka.apache.org/27/javadoc/org/apache/kafka/connect/source/SourceRecord.html#SourceRecord-java.util.Map-java.util.Map-java.lang.String-java.lang.Integer-org.apache.kafka.connect.data.Schema-java.lang.Object-org.apache.kafka.connect.data.Schema-java.lang.Object-java.lang.Long-java.lang.Iterable-]).
 These will appear in the message stored in Kafka as headers, but won't be part 
of the message's key or value. If there's a chance users might benefit from 
this housekeeping metadata, this approach is probably the best.
 # Create a subclass of {{SourceRecord}} and cast to that subclass in your 
{{SourceTask::commitRecord}} method. This subclass could include your custom 
{{attributes}} method, for example. This approach would be best if there is 
absolutely no benefit to downstream users of being able to read this extra 
metadata.

> Add an option to store arbitrary metadata to a SourceRecord
> ---
>
> Key: KAFKA-12391
> URL: https://issues.apache.org/jira/browse/KAFKA-12391
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Luca Burgazzoli
>Priority: Minor
>
> When writing source connectors for Kafka, it may be necessary to perform some 
> additional housekeeping once a record has been acknowledged by the Kafka 
> broker; as of today, it is possible to set up a hook by overriding 
> SourceTask.commitRecord(SourceRecord).
> This works fine in most cases, but to make it easier for the source 
> connector to perform its internal housekeeping, it would be nice to have an 
> option to attach additional metadata to the SourceRecord without affecting 
> the record sent to the Kafka broker, something like:
> {code:java}
> class SourceRecord {
>     public SourceRecord(
>             ...,
>             Map<String, Object> attributes) {
>         ...
>         this.attributes = attributes;
>     }
>
>     Map<String, Object> attributes() {
>         return attributes;
>     }
> }
> {code}





[jira] [Created] (KAFKA-12419) Remove Deprecated APIs of Kafka Streams in 3.0

2021-03-04 Thread Guozhang Wang (Jira)
Guozhang Wang created KAFKA-12419:
-

 Summary: Remove Deprecated APIs of Kafka Streams in 3.0
 Key: KAFKA-12419
 URL: https://issues.apache.org/jira/browse/KAFKA-12419
 Project: Kafka
  Issue Type: Improvement
Reporter: Guozhang Wang
 Fix For: 3.0.0


Here's a list of deprecated APIs that we have accumulated in the past; we can 
consider removing them in 3.0:

* KIP-198: "--zookeeper" flag from StreamsResetter (1.0)
* KIP-171: "--execute" flag from StreamsResetter (1.1)
* KIP-233: overloaded "StreamsBuilder#addGlobalStore" (1.1)
* KIP-251: overloaded "ProcessorContext#forward" (2.0)
* KIP-276: "StreamsConfig#getConsumerConfig" (2.0)
* KIP-319: "WindowBytesStoreSupplier#segments" (2.1)
* KIP-321: "TopologyDescription.Source#topics" (2.1)
* KIP-328: "Windows#until/segmentInterval/maintainMS" (2.1)
* KIP-358: "Windows/Materialized" overloaded functions with `long` (2.1)
* KIP-365/366: Implicit Scala APIs (2.1)
* KIP-372: overloaded "KStream#groupBy" (2.1)
* KIP-307: "Joined#named" (2.3)
* KIP-345: Broker config "group.initial.rebalance.delay.ms" (2.3)
* KIP-429: "PartitionAssignor" interface (2.4)
* KIP-470: "TopologyTestDriver#pipeInput" (2.4)
* KIP-476: overloaded "KafkaClientSupplier#getAdminClient" (2.4)
* KIP-479: overloaded "KStream#join" (2.4)
* KIP-530: old "UsePreviousTimeOnInvalidTimeStamp" (2.5)
* KIP-535 / 562: overloaded "KafkaStreams#metadataForKey" and 
"KafkaStreams#store" (2.5)

And here's a list of already filed JIRAs for removing deprecated APIs
* KAFKA-10434
* KAFKA-7785





[jira] [Commented] (KAFKA-12391) Add an option to store arbitrary metadata to a SourceRecord

2021-03-04 Thread Luca Burgazzoli (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17295425#comment-17295425
 ] 

Luca Burgazzoli commented on KAFKA-12391:
-

[~ChrisEgerton] 

Option 1 is the first I took into account, but since the housekeeping does not 
make any sense outside the source connector, I'd prefer not to push it 
downstream.

I'll have a look at option 2, which seems to be good enough for some simple 
cases. However, the reason I think metadata is better is that sub-classing 
would assume that the record the connector creates is also the record you 
receive in the callback, but - unless I've missed something - that may not be 
true if an SMT creates a copy of the record during the transformation process.

> Add an option to store arbitrary metadata to a SourceRecord
> ---
>
> Key: KAFKA-12391
> URL: https://issues.apache.org/jira/browse/KAFKA-12391
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Luca Burgazzoli
>Priority: Minor
>
> When writing source connectors for Kafka, it may be necessary to perform some 
> additional housekeeping once a record has been acknowledged by the Kafka 
> broker; as of today, it is possible to set up a hook by overriding 
> SourceTask.commitRecord(SourceRecord).
> This works fine in most cases, but to make it easier for the source 
> connector to perform its internal housekeeping, it would be nice to have an 
> option to attach additional metadata to the SourceRecord without affecting 
> the record sent to the Kafka broker, something like:
> {code:java}
> class SourceRecord {
>     public SourceRecord(
>             ...,
>             Map<String, Object> attributes) {
>         ...
>         this.attributes = attributes;
>     }
>
>     Map<String, Object> attributes() {
>         return attributes;
>     }
> }
> {code}





[jira] [Updated] (KAFKA-12419) Remove Deprecated APIs of Kafka Streams in 3.0

2021-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12419:

Component/s: streams-test-utils
 streams

> Remove Deprecated APIs of Kafka Streams in 3.0
> --
>
> Key: KAFKA-12419
> URL: https://issues.apache.org/jira/browse/KAFKA-12419
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, streams-test-utils
>Reporter: Guozhang Wang
>Priority: Major
> Fix For: 3.0.0
>
>
> Here's a list of deprecated APIs that we have accumulated in the past; we can 
> consider removing them in 3.0:
> * KIP-198: "--zookeeper" flag from StreamsResetter (1.0)
> * KIP-171: "--execute" flag from StreamsResetter (1.1)
> * KIP-233: overloaded "StreamsBuilder#addGlobalStore" (1.1)
> * KIP-251: overloaded "ProcessorContext#forward" (2.0)
> * KIP-276: "StreamsConfig#getConsumerConfig" (2.0)
> * KIP-319: "WindowBytesStoreSupplier#segments" (2.1)
> * KIP-321: "TopologyDescription.Source#topics" (2.1)
> * KIP-328: "Windows#until/segmentInterval/maintainMS" (2.1)
> * KIP-358: "Windows/Materialized" overloaded functions with `long` (2.1)
> * KIP-365/366: Implicit Scala APIs (2.1)
> * KIP-372: overloaded "KStream#groupBy" (2.1)
> * KIP-307: "Joined#named" (2.3)
> * KIP-345: Broker config "group.initial.rebalance.delay.ms" (2.3)
> * KIP-429: "PartitionAssignor" interface (2.4)
> * KIP-470: "TopologyTestDriver#pipeInput" (2.4)
> * KIP-476: overloaded "KafkaClientSupplier#getAdminClient" (2.4)
> * KIP-479: overloaded "KStream#join" (2.4)
> * KIP-530: old "UsePreviousTimeOnInvalidTimeStamp" (2.5)
> * KIP-535 / 562: overloaded "KafkaStreams#metadataForKey" and 
> "KafkaStreams#store" (2.5)
> And here's a list of already filed JIRAs for removing deprecated APIs
> * KAFKA-10434
> * KAFKA-7785





[jira] [Commented] (KAFKA-12391) Add an option to store arbitrary metadata to a SourceRecord

2021-03-04 Thread Chris Egerton (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17295445#comment-17295445
 ] 

Chris Egerton commented on KAFKA-12391:
---

SMTs shouldn't interfere with option 2; both times {{SourceTask::commitRecord}} 
is invoked, it's done with the pre-transform record: 
[https://github.com/apache/kafka/blob/be1476869fc93553b3099d387d26bfd0092a9d65/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java#L382]
and
[https://github.com/apache/kafka/blob/be1476869fc93553b3099d387d26bfd0092a9d65/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java#L345].
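As a self-contained illustration of that behavior, the following sketch mirrors the flow: a transform builds a copy (losing the subclass type), but the original pre-transform record is the one handed back to `commitRecord`. All class names here (`SourceRecord`, `TaggedRecord`, `WorkerSketch`) are stand-ins, not the real Connect runtime:

```java
import java.util.Map;

// Stand-ins for illustration only; the real classes live in
// org.apache.kafka.connect.source and org.apache.kafka.connect.runtime.
class SourceRecord {
    final Object value;
    SourceRecord(Object value) { this.value = value; }
}

// A connector-private subclass carrying housekeeping metadata (hypothetical).
class TaggedRecord extends SourceRecord {
    final Map<String, Object> attributes;
    TaggedRecord(Object value, Map<String, Object> attributes) {
        super(value);
        this.attributes = attributes;
    }
}

public class WorkerSketch {
    // A transform (SMT) typically builds a NEW record, so the subclass type
    // is lost on the copy that gets sent to Kafka.
    static SourceRecord applySmt(SourceRecord preTransform) {
        return new SourceRecord(preTransform.value);
    }

    // Mirrors the linked WorkerSourceTask flow: the record is transformed for
    // sending, but the ORIGINAL (pre-transform) record is the one handed back
    // to SourceTask::commitRecord.
    static boolean commitSeesSubclass(SourceRecord preTransform) {
        SourceRecord toSend = applySmt(preTransform);                     // copy
        boolean sentCopyIsTagged = toSend instanceof TaggedRecord;        // false
        boolean committedIsTagged = preTransform instanceof TaggedRecord; // true
        return committedIsTagged && !sentCopyIsTagged;
    }

    public static void main(String[] args) {
        SourceRecord original = new TaggedRecord("payload", Map.of("file", "a.txt"));
        System.out.println(commitSeesSubclass(original)); // prints true
    }
}
```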

> Add an option to store arbitrary metadata to a SourceRecord
> ---
>
> Key: KAFKA-12391
> URL: https://issues.apache.org/jira/browse/KAFKA-12391
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Luca Burgazzoli
>Priority: Minor
>
> When writing source connectors for Kafka, it may be necessary to perform some 
> additional housekeeping once a record has been acknowledged by the Kafka 
> broker; as of today, it is possible to set up a hook by overriding 
> SourceTask.commitRecord(SourceRecord).
> This works fine in most cases, but to make it easier for the source 
> connector to perform its internal housekeeping, it would be nice to have an 
> option to attach additional metadata to the SourceRecord without affecting 
> the record sent to the Kafka broker, something like:
> {code:java}
> class SourceRecord {
>     public SourceRecord(
>             ...,
>             Map<String, Object> attributes) {
>         ...
>         this.attributes = attributes;
>     }
>
>     Map<String, Object> attributes() {
>         return attributes;
>     }
> }
> {code}





[GitHub] [kafka] jsancio commented on a change in pull request #10253: KAFKA-12376: Apply atomic append to the log

2021-03-04 Thread GitBox


jsancio commented on a change in pull request #10253:
URL: https://github.com/apache/kafka/pull/10253#discussion_r587664832



##
File path: 
metadata/src/test/java/org/apache/kafka/controller/FeatureControlManagerTest.java
##
@@ -101,32 +101,55 @@ public void testUpdateFeaturesErrorCases() {
 SnapshotRegistry snapshotRegistry = new SnapshotRegistry(new 
LogContext());
 FeatureControlManager manager = new FeatureControlManager(
 rangeMap("foo", 1, 5, "bar", 1, 2), snapshotRegistry);
-assertEquals(new ControllerResult<>(Collections.
-singletonMap("foo", new ApiError(Errors.INVALID_UPDATE_VERSION,
-"Broker 5 does not support the given feature range."))),
-manager.updateFeatures(rangeMap("foo", 1, 3),
+
+assertEquals(
+ControllerResult.of(
+Collections.emptyList(),
+Collections.singletonMap(
+"foo",
+new ApiError(
+Errors.INVALID_UPDATE_VERSION,
+"Broker 5 does not support the given feature range."
+)
+)
+),
+manager.updateFeatures(
+rangeMap("foo", 1, 3),
 new HashSet<>(Arrays.asList("foo")),

Review comment:
   Yep. Fixed throughout the file.

##
File path: 
metadata/src/main/java/org/apache/kafka/controller/FeatureControlManager.java
##
@@ -69,7 +69,12 @@
 results.put(entry.getKey(), updateFeature(entry.getKey(), 
entry.getValue(),
 downgradeables.contains(entry.getKey()), brokerFeatures, 
records));
 }
-return new ControllerResult<>(records, results);
+
+if (records.isEmpty()) {

Review comment:
   I don't have a good reason. I felt it was consistent that an empty set 
of records was by definition not atomic. I removed the conditional and always 
create an atomic `ControllerResult`.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (KAFKA-12419) Remove Deprecated APIs of Kafka Streams in 3.0

2021-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12419:

Priority: Blocker  (was: Major)

> Remove Deprecated APIs of Kafka Streams in 3.0
> --
>
> Key: KAFKA-12419
> URL: https://issues.apache.org/jira/browse/KAFKA-12419
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, streams-test-utils
>Reporter: Guozhang Wang
>Priority: Blocker
> Fix For: 3.0.0
>
>
> Here's a list of deprecated APIs that we have accumulated in the past; we can 
> consider removing them in 3.0:
> * KIP-198: "--zookeeper" flag from StreamsResetter (1.0)
> * KIP-171: "--execute" flag from StreamsResetter (1.1)
> * KIP-233: overloaded "StreamsBuilder#addGlobalStore" (1.1)
> * KIP-251: overloaded "ProcessorContext#forward" (2.0)
> * KIP-276: "StreamsConfig#getConsumerConfig" (2.0)
> * KIP-319: "WindowBytesStoreSupplier#segments" (2.1)
> * KIP-321: "TopologyDescription.Source#topics" (2.1)
> * KIP-328: "Windows#until/segmentInterval/maintainMS" (2.1)
> * KIP-358: "Windows/Materialized" overloaded functions with `long` (2.1)
> * KIP-365/366: Implicit Scala APIs (2.1)
> * KIP-372: overloaded "KStream#groupBy" (2.1)
> * KIP-307: "Joined#named" (2.3)
> * KIP-345: Broker config "group.initial.rebalance.delay.ms" (2.3)
> * KIP-429: "PartitionAssignor" interface (2.4)
> * KIP-470: "TopologyTestDriver#pipeInput" (2.4)
> * KIP-476: overloaded "KafkaClientSupplier#getAdminClient" (2.4)
> * KIP-479: overloaded "KStream#join" (2.4)
> * KIP-530: old "UsePreviousTimeOnInvalidTimeStamp" (2.5)
> * KIP-535 / 562: overloaded "KafkaStreams#metadataForKey" and 
> "KafkaStreams#store" (2.5)
> And here's a list of already filed JIRAs for removing deprecated APIs
> * KAFKA-10434
> * KAFKA-7785





[jira] [Commented] (KAFKA-12419) Remove Deprecated APIs of Kafka Streams in 3.0

2021-03-04 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17295451#comment-17295451
 ] 

Matthias J. Sax commented on KAFKA-12419:
-

KIP-198 is already tracked via KAFKA-7606, and KIP-319/328 via KAFKA-7106.

> Remove Deprecated APIs of Kafka Streams in 3.0
> --
>
> Key: KAFKA-12419
> URL: https://issues.apache.org/jira/browse/KAFKA-12419
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, streams-test-utils
>Reporter: Guozhang Wang
>Priority: Major
> Fix For: 3.0.0
>
>
> Here's a list of deprecated APIs that we have accumulated in the past; we can 
> consider removing them in 3.0:
> * KIP-198: "--zookeeper" flag from StreamsResetter (1.0)
> * KIP-171: "--execute" flag from StreamsResetter (1.1)
> * KIP-233: overloaded "StreamsBuilder#addGlobalStore" (1.1)
> * KIP-251: overloaded "ProcessorContext#forward" (2.0)
> * KIP-276: "StreamsConfig#getConsumerConfig" (2.0)
> * KIP-319: "WindowBytesStoreSupplier#segments" (2.1)
> * KIP-321: "TopologyDescription.Source#topics" (2.1)
> * KIP-328: "Windows#until/segmentInterval/maintainMS" (2.1)
> * KIP-358: "Windows/Materialized" overloaded functions with `long` (2.1)
> * KIP-365/366: Implicit Scala APIs (2.1)
> * KIP-372: overloaded "KStream#groupBy" (2.1)
> * KIP-307: "Joined#named" (2.3)
> * KIP-345: Broker config "group.initial.rebalance.delay.ms" (2.3)
> * KIP-429: "PartitionAssignor" interface (2.4)
> * KIP-470: "TopologyTestDriver#pipeInput" (2.4)
> * KIP-476: overloaded "KafkaClientSupplier#getAdminClient" (2.4)
> * KIP-479: overloaded "KStream#join" (2.4)
> * KIP-530: old "UsePreviousTimeOnInvalidTimeStamp" (2.5)
> * KIP-535 / 562: overloaded "KafkaStreams#metadataForKey" and 
> "KafkaStreams#store" (2.5)
> And here's a list of already filed JIRAs for removing deprecated APIs
> * KAFKA-10434
> * KAFKA-7785





[GitHub] [kafka] hachikuji commented on a change in pull request #10184: MINOR: enable topic deletion in the KIP-500 controller

2021-03-04 Thread GitBox


hachikuji commented on a change in pull request #10184:
URL: https://github.com/apache/kafka/pull/10184#discussion_r587670430



##
File path: core/src/test/scala/unit/kafka/server/ControllerApisTest.scala
##
@@ -142,6 +148,199 @@ class ControllerApisTest {
   brokerRegistrationResponse.errorCounts().asScala)
   }
 
+  @Test
+  def testDeleteTopicsByName(): Unit = {
+val fooId = Uuid.fromString("vZKYST0pSA2HO5x_6hoO2Q")
+val controller = new MockController.Builder().newInitialTopic("foo", 
fooId).build()
+val controllerApis = createControllerApis(None, controller)
+val request = new DeleteTopicsRequestData().setTopicNames(
+  util.Arrays.asList("foo", "bar", "quux", "quux"))
+val expectedResponse = Set(new DeletableTopicResult().setName("quux").
+setErrorCode(Errors.INVALID_REQUEST.code()).
+setErrorMessage("Duplicate topic name."),
+  new DeletableTopicResult().setName("bar").
+setErrorCode(Errors.UNKNOWN_TOPIC_OR_PARTITION.code()).
+setErrorMessage("This server does not host this topic-partition."),
+  new DeletableTopicResult().setName("foo").setTopicId(fooId))
+assertEquals(expectedResponse, controllerApis.deleteTopics(request,
+  ApiKeys.DELETE_TOPICS.latestVersion().toInt,
+  true,
+  _ => Set.empty,
+  _ => Set.empty).asScala.toSet)
+  }
+
+  @Test
+  def testDeleteTopicsById(): Unit = {
+val fooId = Uuid.fromString("vZKYST0pSA2HO5x_6hoO2Q")
+val barId = Uuid.fromString("VlFu5c51ToiNx64wtwkhQw")
+val quuxId = Uuid.fromString("ObXkLhL_S5W62FAE67U3MQ")
+val controller = new MockController.Builder().newInitialTopic("foo", 
fooId).build()
+val controllerApis = createControllerApis(None, controller)
+val request = new DeleteTopicsRequestData()
+request.topics().add(new 
DeleteTopicState().setName(null).setTopicId(fooId))
+request.topics().add(new 
DeleteTopicState().setName(null).setTopicId(barId))
+request.topics().add(new 
DeleteTopicState().setName(null).setTopicId(quuxId))
+request.topics().add(new 
DeleteTopicState().setName(null).setTopicId(quuxId))
+val response = Set(new 
DeletableTopicResult().setName(null).setTopicId(quuxId).
+setErrorCode(Errors.INVALID_REQUEST.code()).
+setErrorMessage("Duplicate topic id."),
+  new DeletableTopicResult().setName(null).setTopicId(barId).
+setErrorCode(Errors.UNKNOWN_TOPIC_ID.code()).
+setErrorMessage("This server does not host this topic ID."),
+  new DeletableTopicResult().setName("foo").setTopicId(fooId))
+assertEquals(response, controllerApis.deleteTopics(request,
+  ApiKeys.DELETE_TOPICS.latestVersion().toInt,
+  true,
+  _ => Set.empty,
+  _ => Set.empty).asScala.toSet)
+  }
+
+  @Test
+  def testInvalidDeleteTopicsRequest(): Unit = {
+val fooId = Uuid.fromString("vZKYST0pSA2HO5x_6hoO2Q")
+val barId = Uuid.fromString("VlFu5c51ToiNx64wtwkhQw")
+val bazId = Uuid.fromString("YOS4oQ3UT9eSAZahN1ysSA")
+val controller = new MockController.Builder().
+  newInitialTopic("foo", fooId).
+  newInitialTopic("bar", barId).build()
+val controllerApis = createControllerApis(None, controller)
+val request = new DeleteTopicsRequestData()
+request.topics().add(new 
DeleteTopicState().setName(null).setTopicId(ZERO_UUID))
+request.topics().add(new 
DeleteTopicState().setName("foo").setTopicId(fooId))
+request.topics().add(new 
DeleteTopicState().setName("bar").setTopicId(ZERO_UUID))
+request.topics().add(new 
DeleteTopicState().setName(null).setTopicId(barId))
+request.topics().add(new 
DeleteTopicState().setName("quux").setTopicId(ZERO_UUID))
+request.topics().add(new 
DeleteTopicState().setName("quux").setTopicId(ZERO_UUID))
+request.topics().add(new 
DeleteTopicState().setName("quux").setTopicId(ZERO_UUID))
+request.topics().add(new 
DeleteTopicState().setName(null).setTopicId(bazId))
+request.topics().add(new 
DeleteTopicState().setName(null).setTopicId(bazId))
+request.topics().add(new 
DeleteTopicState().setName(null).setTopicId(bazId))
+val response = Set(new 
DeletableTopicResult().setName(null).setTopicId(ZERO_UUID).
+setErrorCode(Errors.INVALID_REQUEST.code()).
+setErrorMessage("Neither topic name nor id were specified."),
+  new DeletableTopicResult().setName("foo").setTopicId(fooId).
+setErrorCode(Errors.INVALID_REQUEST.code()).
+setErrorMessage("You may not specify both topic name and topic id."),
+  new DeletableTopicResult().setName("bar").setTopicId(barId).
+setErrorCode(Errors.INVALID_REQUEST.code()).
+setErrorMessage("The provided topic name maps to an ID that was 
already supplied."),
+  new DeletableTopicResult().setName("quux").setTopicId(ZERO_UUID).
+setErrorCode(Errors.INVALID_REQUEST.code()).
+setErrorMessage("Duplicate topic name."),
+  new DeletableTopicResult().setName(null).setTopicId(ba

[jira] [Commented] (KAFKA-12391) Add an option to store arbitrary metadata to a SourceRecord

2021-03-04 Thread Luca Burgazzoli (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17295455#comment-17295455
 ] 

Luca Burgazzoli commented on KAFKA-12391:
-

Ah ok, good to know. I'll do some tests and report back.

> Add an option to store arbitrary metadata to a SourceRecord
> ---
>
> Key: KAFKA-12391
> URL: https://issues.apache.org/jira/browse/KAFKA-12391
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Luca Burgazzoli
>Priority: Minor
>
> When writing source connectors for Kafka, it may be necessary to perform some 
> additional housekeeping once a record has been acknowledged by the Kafka 
> broker; as of today, it is possible to set up a hook by overriding 
> SourceTask.commitRecord(SourceRecord).
> This works fine in most cases, but to make it easier for the source 
> connector to perform its internal housekeeping, it would be nice to have an 
> option to attach additional metadata to the SourceRecord without affecting 
> the record sent to the Kafka broker, something like:
> {code:java}
> class SourceRecord {
>     public SourceRecord(
>             ...,
>             Map<String, Object> attributes) {
>         ...
>         this.attributes = attributes;
>     }
>
>     Map<String, Object> attributes() {
>         return attributes;
>     }
> }
> {code}





[GitHub] [kafka] hachikuji commented on a change in pull request #10252: KAFKA-12403; Ensure local state deleted on `RemoveTopicRecord` received

2021-03-04 Thread GitBox


hachikuji commented on a change in pull request #10252:
URL: https://github.com/apache/kafka/pull/10252#discussion_r587673099



##
File path: 
metadata/src/main/java/org/apache/kafka/controller/ReplicationControlManager.java
##
@@ -483,7 +475,7 @@ private ApiError createTopic(CreatableTopic topic,
 " times: " + e.getMessage());
 }
 }
-Uuid topicId = new Uuid(random.nextLong(), random.nextLong());
+Uuid topicId = Uuid.randomUuid();

Review comment:
   Yeah, I'm aware. But it felt like something that had to be done. The 
rate of topic creation is typically not high anyway.









[GitHub] [kafka] ijuma commented on a change in pull request #10252: KAFKA-12403; Ensure local state deleted on `RemoveTopicRecord` received

2021-03-04 Thread GitBox


ijuma commented on a change in pull request #10252:
URL: https://github.com/apache/kafka/pull/10252#discussion_r587677099



##
File path: 
metadata/src/main/java/org/apache/kafka/controller/ReplicationControlManager.java
##
@@ -483,7 +475,7 @@ private ApiError createTopic(CreatableTopic topic,
 " times: " + e.getMessage());
 }
 }
-Uuid topicId = new Uuid(random.nextLong(), random.nextLong());
+Uuid topicId = Uuid.randomUuid();

Review comment:
   Can we note this in the PR description?









[jira] [Updated] (KAFKA-12419) Remove Deprecated APIs of Kafka Streams in 3.0

2021-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-12419:

Labels: needs-kip  (was: )

> Remove Deprecated APIs of Kafka Streams in 3.0
> --
>
> Key: KAFKA-12419
> URL: https://issues.apache.org/jira/browse/KAFKA-12419
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, streams-test-utils
>Reporter: Guozhang Wang
>Priority: Blocker
>  Labels: needs-kip
> Fix For: 3.0.0
>
>
> Here's a list of deprecated APIs that we have accumulated in the past; we can 
> consider removing them in 3.0:
> * KIP-198: "--zookeeper" flag from StreamsResetter (1.0)
> * KIP-171: "--execute" flag from StreamsResetter (1.1)
> * KIP-233: overloaded "StreamsBuilder#addGlobalStore" (1.1)
> * KIP-251: overloaded "ProcessorContext#forward" (2.0)
> * KIP-276: "StreamsConfig#getConsumerConfig" (2.0)
> * KIP-319: "WindowBytesStoreSupplier#segments" (2.1)
> * KIP-321: "TopologyDescription.Source#topics" (2.1)
> * KIP-328: "Windows#until/segmentInterval/maintainMS" (2.1)
> * KIP-358: "Windows/Materialized" overloaded functions with `long` (2.1)
> * KIP-365/366: Implicit Scala APIs (2.1)
> * KIP-372: overloaded "KStream#groupBy" (2.1)
> * KIP-307: "Joined#named" (2.3)
> * KIP-345: Broker config "group.initial.rebalance.delay.ms" (2.3)
> * KIP-429: "PartitionAssignor" interface (2.4)
> * KIP-470: "TopologyTestDriver#pipeInput" (2.4)
> * KIP-476: overloaded "KafkaClientSupplier#getAdminClient" (2.4)
> * KIP-479: overloaded "KStream#join" (2.4)
> * KIP-530: old "UsePreviousTimeOnInvalidTimeStamp" (2.5)
> * KIP-535 / 562: overloaded "KafkaStreams#metadataForKey" and 
> "KafkaStreams#store" (2.5)
> And here's a list of already filed JIRAs for removing deprecated APIs
> * KAFKA-10434
> * KAFKA-7785





[GitHub] [kafka] hachikuji commented on pull request #10256: MINOR: Raft max batch size needs to propagate to log config

2021-03-04 Thread GitBox


hachikuji commented on pull request #10256:
URL: https://github.com/apache/kafka/pull/10256#issuecomment-790833577


   @ijuma Yes, probably. I'm working on it.







[GitHub] [kafka] mumrah merged pull request #10253: KAFKA-12376: Apply atomic append to the log

2021-03-04 Thread GitBox


mumrah merged pull request #10253:
URL: https://github.com/apache/kafka/pull/10253


   







[GitHub] [kafka] jolshan commented on pull request #9590: KAFKA-7556: KafkaConsumer.beginningOffsets does not return actual first offsets

2021-03-04 Thread GitBox


jolshan commented on pull request #9590:
URL: https://github.com/apache/kafka/pull/9590#issuecomment-790855545


   @hachikuji In the example above, we could also have something like `35.swap` 
too, right, since we are updating the baseOffset of each segment? (Which means 
we will need to look at the next segment's offsets as well.) Can we also assume 
that all of the log will be replaced? That is, `15.swap` is the lowest swap 
file, so it must cover `0.log`.







[GitHub] [kafka] hachikuji commented on pull request #10184: MINOR: enable topic deletion in the KIP-500 controller

2021-03-04 Thread GitBox


hachikuji commented on pull request #10184:
URL: https://github.com/apache/kafka/pull/10184#issuecomment-790862249


   Thanks for reviews. I will merge this on behalf of @cmccabe to trunk and 2.8.







[GitHub] [kafka] ijuma commented on pull request #10203: KAFKA-12415 Prepare for Gradle 7.0 and restrict transitive scope for non api dependencies

2021-03-04 Thread GitBox


ijuma commented on pull request #10203:
URL: https://github.com/apache/kafka/pull/10203#issuecomment-790865459


   No test failures, but one of the jobs was killed in the last run:
   
https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-10203/19/testReport/
   
   The previous runs confirm that it's a flaky issue unrelated to this PR 
(there are no test changes here).







[GitHub] [kafka] ijuma merged pull request #10203: KAFKA-12415 Prepare for Gradle 7.0 and restrict transitive scope for non api dependencies

2021-03-04 Thread GitBox


ijuma merged pull request #10203:
URL: https://github.com/apache/kafka/pull/10203


   







[jira] [Resolved] (KAFKA-12411) Group Coordinator Followers failing with OutOfOrderOffsetException

2021-03-04 Thread Sandeep (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep resolved KAFKA-12411.
-
Resolution: Duplicate

> Group Coordinator Followers failing with OutOfOrderOffsetException
> --
>
> Key: KAFKA-12411
> URL: https://issues.apache.org/jira/browse/KAFKA-12411
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Sandeep
>Priority: Major
> Attachments: replica_logs
>
>
> After a group coordinator failure and new leader election, the followers are 
> failing with OffsetsOutOfOrderException. 
> Clearing the follower log directory and restarting did not help.
>  
> Broker Version: 2.6.0
> Zookeeper: 3.0.7
> PFA for follower logs





[jira] [Resolved] (KAFKA-12415) Prepare for Gradle 7.0 and restrict transitive scope for non api dependencies

2021-03-04 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-12415.
-
Resolution: Fixed

> Prepare for Gradle 7.0 and restrict transitive scope for non api dependencies
> -
>
> Key: KAFKA-12415
> URL: https://issues.apache.org/jira/browse/KAFKA-12415
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Major
> Fix For: 3.0.0
>
>
> Also required:
>  * Migrate from maven plugin to maven-publish plugin
>  * Migrate from java to java-library plugin





[jira] [Resolved] (KAFKA-12416) Group Coordinator followers are failing with OffsetsOutOfOrderException

2021-03-04 Thread Sandeep (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep resolved KAFKA-12416.
-
Resolution: Duplicate

> Group Coordinator followers are failing with OffsetsOutOfOrderException
> ---
>
> Key: KAFKA-12416
> URL: https://issues.apache.org/jira/browse/KAFKA-12416
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Reporter: Sandeep
>Priority: Major
> Attachments: replica_logs
>
>
> Upon failure of group coordinator, the followers of newly elected group 
> coordinator are failing with  OffsetsOutOfOrderException
> Kafka Broker Version: 2.6.0
> Zookeeper version: 3.0.7
> consumer API: 1.6.0
> producer: librdkafka: 0.9.1
> PFA: [^replica_logs]





[jira] [Resolved] (KAFKA-12413) Group Coordinator followers are failing with OffsetsOutOfOrderException

2021-03-04 Thread Sandeep (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep resolved KAFKA-12413.
-
Resolution: Duplicate

> Group Coordinator followers are failing with OffsetsOutOfOrderException
> ---
>
> Key: KAFKA-12413
> URL: https://issues.apache.org/jira/browse/KAFKA-12413
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sandeep
>Priority: Major
> Attachments: replica_logs
>
>
> Upon failure of the group coordinator, the followers of the newly elected
> group coordinator are failing with OffsetsOutOfOrderException.
> Kafka Broker Version: 2.6.0
> Zookeeper version: 3.0.7
> consumer API: 1.6.0
> producer: librdkafka: 0.9.1
> Please find the follower logs attached.





[jira] [Resolved] (KAFKA-12414) Group Coordinator followers are failing with OffsetsOutOfOrderException

2021-03-04 Thread Sandeep (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep resolved KAFKA-12414.
-
Resolution: Duplicate

> Group Coordinator followers are failing with OffsetsOutOfOrderException
> ---
>
> Key: KAFKA-12414
> URL: https://issues.apache.org/jira/browse/KAFKA-12414
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 2.6.0
>Reporter: Sandeep
>Priority: Major
> Attachments: replica_logs
>
>
> Upon failure of the group coordinator, the followers of the newly elected
> group coordinator are failing with OffsetsOutOfOrderException.
> Kafka Broker Version: 2.6.0
> Zookeeper version: 3.0.7
> consumer API: 1.6.0
> producer: librdkafka: 0.9.1
> Please find the follower logs attached.





[GitHub] [kafka] hachikuji merged pull request #10184: MINOR: enable topic deletion in the KIP-500 controller

2021-03-04 Thread GitBox


hachikuji merged pull request #10184:
URL: https://github.com/apache/kafka/pull/10184


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (KAFKA-9831) Failing test: EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState[exactly_once_beta]

2021-03-04 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-9831:
--

Assignee: Luke Chen  (was: Matthias J. Sax)

> Failing test: 
> EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState[exactly_once_beta]
> --
>
> Key: KAFKA-9831
> URL: https://issues.apache.org/jira/browse/KAFKA-9831
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: John Roesler
>Assignee: Luke Chen
>Priority: Major
> Attachments: one.stdout.txt, two.stdout.txt
>
>
> I've seen this fail twice in a row on the same build, but with different 
> errors. Stacktraces follow; stdout is attached.
> One:
> {noformat}
> java.lang.AssertionError: Did not receive all 40 records from topic 
> singlePartitionOutputTopic within 6 ms
> Expected: is a value equal to or greater than <40>
>  but: <39> was less than <40>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.lambda$waitUntilMinKeyValueRecordsReceived$1(IntegrationTestUtils.java:517)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:415)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:383)
>   at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:513)
>   at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:491)
>   at 
> org.apache.kafka.streams.integration.EosIntegrationTest.readResult(EosIntegrationTest.java:766)
>   at 
> org.apache.kafka.streams.integration.EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState(EosIntegrationTest.java:473)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.intern

[jira] [Commented] (KAFKA-9831) Failing test: EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState[exactly_once_beta]

2021-03-04 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17295547#comment-17295547
 ] 

Matthias J. Sax commented on KAFKA-9831:


Absolutely! Reassigning.

> Failing test: 
> EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState[exactly_once_beta]
> --
>
> Key: KAFKA-9831
> URL: https://issues.apache.org/jira/browse/KAFKA-9831
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: John Roesler
>Assignee: Matthias J. Sax
>Priority: Major
> Attachments: one.stdout.txt, two.stdout.txt
>
>
> I've seen this fail twice in a row on the same build, but with different 
> errors. Stacktraces follow; stdout is attached.
> One:
> {noformat}
> java.lang.AssertionError: Did not receive all 40 records from topic 
> singlePartitionOutputTopic within 6 ms
> Expected: is a value equal to or greater than <40>
>  but: <39> was less than <40>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.lambda$waitUntilMinKeyValueRecordsReceived$1(IntegrationTestUtils.java:517)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:415)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:383)
>   at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:513)
>   at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:491)
>   at 
> org.apache.kafka.streams.integration.EosIntegrationTest.readResult(EosIntegrationTest.java:766)
>   at 
> org.apache.kafka.streams.integration.EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState(EosIntegrationTest.java:473)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>

[GitHub] [kafka] mumrah commented on a change in pull request #10256: MINOR: Raft max batch size needs to propagate to log config

2021-03-04 Thread GitBox


mumrah commented on a change in pull request #10256:
URL: https://github.com/apache/kafka/pull/10256#discussion_r587767322



##
File path: core/src/main/scala/kafka/raft/KafkaMetadataLog.scala
##
@@ -297,10 +288,43 @@ final class KafkaMetadataLog private (
 }
 
 object KafkaMetadataLog {
+
   def apply(
+topicPartition: TopicPartition,
+dataDir: File,
+time: Time,
+scheduler: Scheduler,
+maxBatchSizeInBytes: Int,
+maxFetchSizeInBytes: Int
+  ): KafkaMetadataLog = {
+val props = new Properties()
+props.put(LogConfig.MaxMessageBytesProp, maxBatchSizeInBytes.toString)
+props.put(LogConfig.MessageFormatVersionProp, 
ApiVersion.latestVersion.toString)
+
+LogConfig.validateValues(props)
+val defaultLogConfig = LogConfig(props)
+
+val log = Log(

Review comment:
   👍  Makes sense to move this here since KafkaMetadataLog fully owns the 
Log's lifecycle

##
File path: core/src/main/scala/kafka/raft/KafkaMetadataLog.scala
##
@@ -297,10 +288,43 @@ final class KafkaMetadataLog private (
 }
 
 object KafkaMetadataLog {
+
   def apply(
+topicPartition: TopicPartition,
+dataDir: File,
+time: Time,
+scheduler: Scheduler,
+maxBatchSizeInBytes: Int,
+maxFetchSizeInBytes: Int
+  ): KafkaMetadataLog = {
+val props = new Properties()
+props.put(LogConfig.MaxMessageBytesProp, maxBatchSizeInBytes.toString)
+props.put(LogConfig.MessageFormatVersionProp, 
ApiVersion.latestVersion.toString)
+
+LogConfig.validateValues(props)
+val defaultLogConfig = LogConfig(props)
+
+val log = Log(
+  dir = dataDir,
+  config = defaultLogConfig,
+  logStartOffset = 0L,
+  recoveryPoint = 0L,
+  scheduler = scheduler,
+  brokerTopicStats = new BrokerTopicStats,
+  time = time,
+  maxProducerIdExpirationMs = Int.MaxValue,
+  producerIdExpirationCheckIntervalMs = Int.MaxValue,
+  logDirFailureChannel = new LogDirFailureChannel(5),
+  keepPartitionMetadataFile = false
+)
+
+KafkaMetadataLog(log, topicPartition, maxFetchSizeInBytes)
+  }
+
+  private def apply(

Review comment:
   Do we still need this additional private factory? Is it used by tests or 
something? Looks like it's just called from a few lines above, maybe we can 
just combine the two?

##
File path: core/src/test/scala/kafka/raft/KafkaMetadataLogTest.scala
##
@@ -310,44 +288,109 @@ final class KafkaMetadataLogTest {
 
 log.close()
 
-val secondLog = buildMetadataLog(tempDir, mockTime, topicPartition)
+val secondLog = buildMetadataLog(tempDir, mockTime)
 
 assertEquals(snapshotId, secondLog.latestSnapshotId.get)
 assertEquals(snapshotId.offset, secondLog.startOffset)
 assertEquals(snapshotId.epoch, secondLog.lastFetchedEpoch)
 assertEquals(snapshotId.offset, secondLog.endOffset().offset)
 assertEquals(snapshotId.offset, secondLog.highWatermark.offset)
   }
+
+  @Test
+  def testMaxBatchSize(): Unit = {
+val leaderEpoch = 5
+val maxBatchSizeInBytes = 16384
+val recordSize = 64

Review comment:
   Does it matter that maxBatchSizeInBytes is a multiple of the record 
size? Could you have done 
   
   > buildFullBatch(leaderEpoch, recordSize, maxBatchSizeInBytes + 1) 
   
   to get the same RecordTooLargeException?

##
File path: core/src/main/scala/kafka/raft/RaftManager.scala
##
@@ -220,25 +220,14 @@ class KafkaRaftManager[T](
   }
 
   private def buildMetadataLog(): KafkaMetadataLog = {
-val defaultProps = LogConfig.extractLogConfigMap(config)

Review comment:
   I see we are no longer calling this, but rather explicitly populating 
the log properties. Are there other log configs we might want to expose? 









[GitHub] [kafka] hachikuji commented on a change in pull request #10256: MINOR: Raft max batch size needs to propagate to log config

2021-03-04 Thread GitBox


hachikuji commented on a change in pull request #10256:
URL: https://github.com/apache/kafka/pull/10256#discussion_r587774537



##
File path: core/src/main/scala/kafka/raft/KafkaMetadataLog.scala
##
@@ -297,10 +288,43 @@ final class KafkaMetadataLog private (
 }
 
 object KafkaMetadataLog {
+
   def apply(
+topicPartition: TopicPartition,
+dataDir: File,
+time: Time,
+scheduler: Scheduler,
+maxBatchSizeInBytes: Int,
+maxFetchSizeInBytes: Int
+  ): KafkaMetadataLog = {
+val props = new Properties()
+props.put(LogConfig.MaxMessageBytesProp, maxBatchSizeInBytes.toString)
+props.put(LogConfig.MessageFormatVersionProp, 
ApiVersion.latestVersion.toString)
+
+LogConfig.validateValues(props)
+val defaultLogConfig = LogConfig(props)
+
+val log = Log(
+  dir = dataDir,
+  config = defaultLogConfig,
+  logStartOffset = 0L,
+  recoveryPoint = 0L,
+  scheduler = scheduler,
+  brokerTopicStats = new BrokerTopicStats,
+  time = time,
+  maxProducerIdExpirationMs = Int.MaxValue,
+  producerIdExpirationCheckIntervalMs = Int.MaxValue,
+  logDirFailureChannel = new LogDirFailureChannel(5),
+  keepPartitionMetadataFile = false
+)
+
+KafkaMetadataLog(log, topicPartition, maxFetchSizeInBytes)
+  }
+
+  private def apply(

Review comment:
   Yeah, I got a little lazy here. Let me do the snapshot stuff in a helper 
method.









[GitHub] [kafka] hachikuji commented on a change in pull request #10256: MINOR: Raft max batch size needs to propagate to log config

2021-03-04 Thread GitBox


hachikuji commented on a change in pull request #10256:
URL: https://github.com/apache/kafka/pull/10256#discussion_r58995



##
File path: core/src/test/scala/kafka/raft/KafkaMetadataLogTest.scala
##
@@ -310,44 +288,109 @@ final class KafkaMetadataLogTest {
 
 log.close()
 
-val secondLog = buildMetadataLog(tempDir, mockTime, topicPartition)
+val secondLog = buildMetadataLog(tempDir, mockTime)
 
 assertEquals(snapshotId, secondLog.latestSnapshotId.get)
 assertEquals(snapshotId.offset, secondLog.startOffset)
 assertEquals(snapshotId.epoch, secondLog.lastFetchedEpoch)
 assertEquals(snapshotId.offset, secondLog.endOffset().offset)
 assertEquals(snapshotId.offset, secondLog.highWatermark.offset)
   }
+
+  @Test
+  def testMaxBatchSize(): Unit = {
+val leaderEpoch = 5
+val maxBatchSizeInBytes = 16384
+val recordSize = 64

Review comment:
   It's a little tricky to get the alignment perfect after accounting for 
the overhead of the batch itself. 
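The alignment point above can be illustrated with simple arithmetic. A rough sketch follows; the `BATCH_OVERHEAD` constant and the `recordsThatFit` helper are assumptions for illustration, not the actual Kafka record-batch layout, whose header size depends on the message format version.

```java
public class BatchAlignment {
    // Assumed per-batch header overhead in bytes (illustrative only).
    static final int BATCH_OVERHEAD = 61;

    // How many fixed-size records fit under the max batch size once the
    // batch header is accounted for (integer division discards the remainder).
    static int recordsThatFit(int maxBatchSizeInBytes, int recordSize) {
        return (maxBatchSizeInBytes - BATCH_OVERHEAD) / recordSize;
    }

    public static void main(String[] args) {
        int maxBatchSizeInBytes = 16384; // values from the test under review
        int recordSize = 64;
        int n = recordsThatFit(maxBatchSizeInBytes, recordSize);
        // Leftover bytes show why record payloads rarely align exactly with
        // maxBatchSizeInBytes once the header is subtracted.
        int used = BATCH_OVERHEAD + n * recordSize;
        System.out.println(n + " records, " + (maxBatchSizeInBytes - used) + " bytes left over");
        // prints "255 records, 3 bytes left over"
    }
}
```

Even with `maxBatchSizeInBytes` a multiple of the record size, the fixed header consumes part of the budget, so the last record boundary does not land on the batch-size limit.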









[GitHub] [kafka] hachikuji opened a new pull request #10264: HOTFIX: Controller topic deletion should be atomic

2021-03-04 Thread GitBox


hachikuji opened a new pull request #10264:
URL: https://github.com/apache/kafka/pull/10264


   We merged https://github.com/apache/kafka/pull/10253 and 
https://github.com/apache/kafka/pull/10184 at about the same time, which caused 
a build error. Topic deletions should be atomic.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   







[GitHub] [kafka] hachikuji commented on pull request #10264: HOTFIX: Controller topic deletion should be atomic

2021-03-04 Thread GitBox


hachikuji commented on pull request #10264:
URL: https://github.com/apache/kafka/pull/10264#issuecomment-790911623


   I will merge to trunk. The metadata tests are passing locally.







[GitHub] [kafka] hachikuji merged pull request #10264: HOTFIX: Controller topic deletion should be atomic

2021-03-04 Thread GitBox


hachikuji merged pull request #10264:
URL: https://github.com/apache/kafka/pull/10264


   







[GitHub] [kafka] jeqo opened a new pull request #10265: To record value

2021-03-04 Thread GitBox


jeqo opened a new pull request #10265:
URL: https://github.com/apache/kafka/pull/10265


   Draft implementation of 
[KIP-634](https://cwiki.apache.org/confluence/display/KAFKA/KIP-634%3A+Complementary+support+for+headers+and+record+metadata+in+Kafka+Streams+DSL)
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   







[GitHub] [kafka] hachikuji commented on a change in pull request #10256: MINOR: Raft max batch size needs to propagate to log config

2021-03-04 Thread GitBox


hachikuji commented on a change in pull request #10256:
URL: https://github.com/apache/kafka/pull/10256#discussion_r587803031



##
File path: core/src/main/scala/kafka/raft/RaftManager.scala
##
@@ -220,25 +220,14 @@ class KafkaRaftManager[T](
   }
 
   private def buildMetadataLog(): KafkaMetadataLog = {
-val defaultProps = LogConfig.extractLogConfigMap(config)

Review comment:
   It seemed best for now to not allow overrides since this is a new usage 
of `Log` and we haven't had time to understand the impact of all of the 
configurations for this usage. Most of them are probably safe, but others do 
not even make sense (e.g. the retention settings). Let me open a JIRA so that 
we can consider which configs we want to expose for the metadata log and how we 
want to expose them.









[GitHub] [kafka] hachikuji commented on a change in pull request #10256: MINOR: Raft max batch size needs to propagate to log config

2021-03-04 Thread GitBox


hachikuji commented on a change in pull request #10256:
URL: https://github.com/apache/kafka/pull/10256#discussion_r587809203



##
File path: core/src/test/scala/kafka/raft/KafkaMetadataLogTest.scala
##
@@ -310,44 +288,109 @@ final class KafkaMetadataLogTest {
 
 log.close()
 
-val secondLog = buildMetadataLog(tempDir, mockTime, topicPartition)
+val secondLog = buildMetadataLog(tempDir, mockTime)
 
 assertEquals(snapshotId, secondLog.latestSnapshotId.get)
 assertEquals(snapshotId.offset, secondLog.startOffset)
 assertEquals(snapshotId.epoch, secondLog.lastFetchedEpoch)
 assertEquals(snapshotId.offset, secondLog.endOffset().offset)
 assertEquals(snapshotId.offset, secondLog.highWatermark.offset)
   }
+
+  @Test
+  def testMaxBatchSize(): Unit = {
+val leaderEpoch = 5
+val maxBatchSizeInBytes = 16384
+val recordSize = 64

Review comment:
   I tried it out just to see if we got lucky on the alignment, but 
unfortunately we didn't.









[jira] [Created] (KAFKA-12420) Kafka network Selector class has many constructors; use a Builder pattern instead

2021-03-04 Thread Steve Rodrigues (Jira)
Steve Rodrigues created KAFKA-12420:
---

 Summary: Kafka network Selector class has many constructors; use a 
Builder pattern instead
 Key: KAFKA-12420
 URL: https://issues.apache.org/jira/browse/KAFKA-12420
 Project: Kafka
  Issue Type: Improvement
  Components: network
Affects Versions: 2.7.0
Reporter: Steve Rodrigues
Assignee: Steve Rodrigues


The Kafka network Selector has a myriad of constructor parameters; to cover its 
multiple use cases, the class has 6 distinct constructors taking up to 12 
parameters (or various combinations thereof). This small task proposes a builder 
pattern to consolidate construction into a single, simple path going forward.
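A rough sketch of the proposed direction follows. All names below — `NetworkSelector`, the builder's parameter set, and the default values — are illustrative stand-ins, not the actual Kafka `Selector` API.

```java
// Hypothetical builder consolidating a many-parameter constructor behind a
// single construction path; names and defaults are illustrative only.
public class NetworkSelector {
    final long connectionMaxIdleMs;
    final int maxReceiveSize;
    final String metricGrpPrefix;
    final boolean recordTimePerConnection;

    private NetworkSelector(Builder b) {
        this.connectionMaxIdleMs = b.connectionMaxIdleMs;
        this.maxReceiveSize = b.maxReceiveSize;
        this.metricGrpPrefix = b.metricGrpPrefix;
        this.recordTimePerConnection = b.recordTimePerConnection;
    }

    public static class Builder {
        // Defaults cover the common case; callers override only what they need.
        private long connectionMaxIdleMs = 540_000L;
        private int maxReceiveSize = -1; // -1: no limit
        private String metricGrpPrefix = "";
        private boolean recordTimePerConnection = false;

        public Builder connectionMaxIdleMs(long ms) { this.connectionMaxIdleMs = ms; return this; }
        public Builder maxReceiveSize(int bytes) { this.maxReceiveSize = bytes; return this; }
        public Builder metricGrpPrefix(String prefix) { this.metricGrpPrefix = prefix; return this; }
        public Builder recordTimePerConnection(boolean flag) { this.recordTimePerConnection = flag; return this; }

        public NetworkSelector build() { return new NetworkSelector(this); }
    }
}
```

A call site would then read `new NetworkSelector.Builder().maxReceiveSize(1 << 20).build()` instead of choosing among six positional constructors, and adding a new parameter means one new builder method rather than another constructor overload.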




