[jira] [Updated] (KAFKA-10885) Refactor MemoryRecordsBuilderTest/MemoryRecordsTest to avoid a lot of (unnecessary) ignored test cases

2020-12-23 Thread GeordieMai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GeordieMai updated KAFKA-10885:
---
Description: 
{quote}
private void assumeAtLeastV2OrNotZstd(byte magic) {
    assumeTrue(compressionType != CompressionType.ZSTD || magic >= MAGIC_VALUE_V2);
}
{quote}
MemoryRecordsBuilderTest/MemoryRecordsTest use the aforementioned method to avoid 
testing zstd on unsupported magic codes. However, it produces a number of 
unnecessarily ignored test cases. It could instead be split into separate test 
classes per magic code, since we already assign a specific magic code to each 
test case.
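The refactor suggested above could look roughly like this: enumerate only the valid (magic, compression) combinations up front, so zstd cases below V2 are never generated instead of being skipped at runtime. A minimal, self-contained sketch with hypothetical stand-in constants (the real tests use RecordBatch magic values and CompressionType):

```java
import java.util.ArrayList;
import java.util.List;

public class MagicMatrixSketch {
    // Hypothetical stand-ins for RecordBatch.MAGIC_VALUE_V0..V2 and CompressionType.
    static final byte[] MAGICS = {0, 1, 2};
    static final String[] COMPRESSIONS = {"NONE", "GZIP", "SNAPPY", "LZ4", "ZSTD"};

    // Enumerate only the (magic, compression) pairs that are actually valid:
    // zstd requires message format V2, so older magics never pair with it.
    static List<String> validCombinations() {
        List<String> combos = new ArrayList<>();
        for (byte magic : MAGICS) {
            for (String compression : COMPRESSIONS) {
                if (compression.equals("ZSTD") && magic < 2) {
                    continue; // never generated, so never reported as "ignored"
                }
                combos.add("magic=" + magic + ",compression=" + compression);
            }
        }
        return combos;
    }

    public static void main(String[] args) {
        // 3 magics x 5 codecs = 15, minus the 2 invalid zstd pairs = 13.
        System.out.println(validCombinations().size());
    }
}
```

Feeding such a list to a parameterized test (or splitting it into one class per magic value) would replace the runtime assumeTrue skip entirely.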
 
 

  was:
{quote}

private void assumeAtLeastV2OrNotZstd(byte magic) {
 assumeTrue(compressionType != CompressionType.ZSTD || magic >= MAGIC_VALUE_V2);
}

{quote}

MemoryRecordsBuilderTest/MemoryRecordsTest use the aforementioned method to avoid 
testing zstd on unsupported magic codes. However, it produces a number of 
unnecessarily ignored test cases. It could instead be split into separate test 
classes per magic code, since we already assign a specific magic code to each 
test case.


> Refactor MemoryRecordsBuilderTest/MemoryRecordsTest to avoid a lot of 
> (unnecessary) ignored test cases
> --
>
> Key: KAFKA-10885
> URL: https://issues.apache.org/jira/browse/KAFKA-10885
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: GeordieMai
>Priority: Major
>  Labels: newbie
>
> {quote}
> private void assumeAtLeastV2OrNotZstd(byte magic) {
>     assumeTrue(compressionType != CompressionType.ZSTD || magic >= MAGIC_VALUE_V2);
> }
> {quote}
> MemoryRecordsBuilderTest/MemoryRecordsTest use the aforementioned method to 
> avoid testing zstd on unsupported magic codes. However, it produces a number 
> of unnecessarily ignored test cases. It could instead be split into separate 
> test classes per magic code, since we already assign a specific magic code to 
> each test case.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka] g1geordie commented on a change in pull request #9778: KAFKA-10874 Fix flaky ClientQuotasRequestTest.testAlterIpQuotasRequest

2020-12-23 Thread GitBox


g1geordie commented on a change in pull request #9778:
URL: https://github.com/apache/kafka/pull/9778#discussion_r548427248



##
File path: core/src/test/scala/unit/kafka/server/ClientQuotasRequestTest.scala
##
@@ -212,7 +214,9 @@ class ClientQuotasRequestTest extends BaseRequestTest {
   InetAddress.getByName(unknownHost)
 else
   InetAddress.getByName(entityName)
-assertEquals(expectedMatches(entity), 
servers.head.socketServer.connectionQuotas.connectionRateForIp(entityIp), 0.01)
+TestUtils.retry(1L) {

Review comment:
   Yes sir 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] g1geordie opened a new pull request #9785: KAFKA-10887 Migrate log4j-appender module to JUnit 5

2020-12-23 Thread GitBox


g1geordie opened a new pull request #9785:
URL: https://github.com/apache/kafka/pull/9785


   Migrate log4j-appender module to JUnit 5







[GitHub] [kafka] showuon commented on a change in pull request #9733: KAFKA-10017: fix 2 issues in EosBetaUpgradeIntegrationTest

2020-12-23 Thread GitBox


showuon commented on a change in pull request #9733:
URL: https://github.com/apache/kafka/pull/9733#discussion_r548399660



##
File path: 
streams/src/test/java/org/apache/kafka/streams/integration/EosBetaUpgradeIntegrationTest.java
##
@@ -246,18 +261,19 @@ public void shouldUpgradeFromEosAlphaToEosBeta() throws 
Exception {
 assignmentListener.waitForNextStableAssignment(MAX_WAIT_TIME_MS);
 waitForRunning(stateTransitions1);
 
-streams2Alpha = getKafkaStreams("appDir2", 
StreamsConfig.EXACTLY_ONCE);
+streams2Alpha = getKafkaStreams(APP_DIR_2, 
StreamsConfig.EXACTLY_ONCE);
 streams2Alpha.setStateListener(
 (newState, oldState) -> 
stateTransitions2.add(KeyValue.pair(oldState, newState))
 );
 stateTransitions1.clear();
 
-assignmentListener.prepareForRebalance();
+prevNumAssignments = assignmentListener.prepareForRebalance();
 streams2Alpha.cleanUp();
 streams2Alpha.start();
 assignmentListener.waitForNextStableAssignment(MAX_WAIT_TIME_MS);
+expectedNumAssignments = assignmentListener.numTotalAssignments() 
- prevNumAssignments;
+waitForNumRebalancingToRunning(stateTransitions2, 
expectedNumAssignments);
 waitForRunning(stateTransitions1);

Review comment:
   Thanks for your questions. Answers below:
   > Also, your PR does not update all stages when we wait for state changes. 
Why? Would we not need to apply the logic each time?
   
   --> Yes, I should do that. For now I focused on the case of 1 newly started 
stream + 1 already running stream, which has more failures here. But you're 
right, I should apply the change to all stages.
   
   > If your reasoning is right, do we need to use 
waitForNumRebalancingToRunning here, too? If there are multiple rebalances, 
both clients would participate?
   
   --> Good question! I explained this in a previous comment 
(https://github.com/apache/kafka/pull/9733#issuecomment-748849742), but I can 
explain again, since it may not have been clear why I made this change.
   
   I only `waitForNumRebalancingToRunning` for the newly started stream, not 
for the "already running stream", because I found that if the stream runs fast 
enough, the "already running stream" might not have the expected number of 
`[REBALANCING -> RUNNING]` state transitions. The reason is this line:
   ```
   [appDir2-StreamThread-1] State transition from PARTITIONS_ASSIGNED to 
PARTITIONS_ASSIGNED 
   ```
   Basically, the already running stream thread should have the state changes: 
`[RUNNING to PARTITIONS_REVOKED]`, `[PARTITIONS_REVOKED to 
PARTITIONS_ASSIGNED] (unstable)`, `[PARTITIONS_ASSIGNED to RUNNING]`, 
`[RUNNING to PARTITIONS_ASSIGNED] (stable)`, `[PARTITIONS_ASSIGNED to 
RUNNING]`. Because it needs one more PARTITIONS_REVOKED step, it might see 2 
consecutive PARTITIONS_ASSIGNED states (no RUNNING in the middle). And that's 
why the stream client doesn't change to RUNNING as we expected.
   
   Does that make sense? 
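   The expected-transition counting described above can be sketched as follows (hypothetical helper, not the actual EosBetaUpgradeIntegrationTest code): the test would poll a count like this until it reaches the number of rebalances computed from `numTotalAssignments()`.

```java
import java.util.ArrayList;
import java.util.List;

public class TransitionCountSketch {
    // Count REBALANCING -> RUNNING transitions observed so far; the test would
    // poll this until it reaches the expected number of rebalances.
    static int numRebalancingToRunning(List<String[]> transitions) {
        int count = 0;
        for (String[] t : transitions) {
            if (t[0].equals("REBALANCING") && t[1].equals("RUNNING")) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        List<String[]> observed = new ArrayList<>();
        // A fast client may collapse two rebalances into one REBALANCING phase,
        // which is why the already-running client can see fewer transitions.
        observed.add(new String[]{"RUNNING", "REBALANCING"});
        observed.add(new String[]{"REBALANCING", "RUNNING"});
        observed.add(new String[]{"RUNNING", "REBALANCING"});
        observed.add(new String[]{"REBALANCING", "RUNNING"});
        System.out.println(numRebalancingToRunning(observed));
    }
}
```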












[jira] [Resolved] (KAFKA-10871) StreamTask shouldn't take WallClockTime as input parameter in process method

2020-12-23 Thread Rohit Deshpande (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohit Deshpande resolved KAFKA-10871.
-
Resolution: Not A Bug

> StreamTask shouldn't take WallClockTime as input parameter in process method
> 
>
> Key: KAFKA-10871
> URL: https://issues.apache.org/jira/browse/KAFKA-10871
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.7.0
>Reporter: Rohit Deshpande
>Assignee: Rohit Deshpande
>Priority: Major
>  Labels: newbie
>
> While working on https://issues.apache.org/jira/browse/KAFKA-10062 I realized 
> process method in StreamTask is taking
> wallClockTime as input parameter which is redundant as StreamTask already 
> contains 
> time(https://github.com/apache/kafka/blob/2.5.0/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java#L75)
>  field which represents wallClockTime.
> In process method 
> (https://github.com/apache/kafka/blob/2.7/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java#L664),
>  wallClockTime can be passed from StreamTask's time field itself. 
> This method was changed as part of pr: 
> https://github.com/apache/kafka/pull/7997. 
> As part of https://issues.apache.org/jira/browse/KAFKA-10062, I believe 
> wallClockTime need not be stored in 
> ProcessorContext(https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/AbstractProcessorContext.java#L48)
>  but should be fetched from StreamTask's time field. Reference pr: 
> https://github.com/apache/kafka/pull/9744
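A minimal sketch of the proposed change, with hypothetical simplified types: process() reads the time from the task's own field rather than taking wallClockTime as a parameter (the real StreamTask uses org.apache.kafka.common.utils.Time):

```java
public class StreamTaskSketch {
    // Hypothetical stand-in for org.apache.kafka.common.utils.Time.
    interface Time {
        long milliseconds();
    }

    private final Time time;
    private long lastProcessedMs;

    StreamTaskSketch(Time time) {
        this.time = time;
    }

    // Before: process(long wallClockTime) -- the parameter is redundant,
    // because the task already holds a Time instance.
    boolean process() {
        lastProcessedMs = time.milliseconds(); // fetched from the task's own field
        return true;
    }

    public static void main(String[] args) {
        StreamTaskSketch task = new StreamTaskSketch(() -> 42L);
        task.process();
        System.out.println(task.lastProcessedMs);
    }
}
```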





[jira] [Commented] (KAFKA-10871) StreamTask shouldn't take WallClockTime as input parameter in process method

2020-12-23 Thread Rohit Deshpande (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254408#comment-17254408
 ] 

Rohit Deshpande commented on KAFKA-10871:
-

Got it, thanks [~mjsax]. I will go ahead and close this.

> StreamTask shouldn't take WallClockTime as input parameter in process method
> 
>
> Key: KAFKA-10871
> URL: https://issues.apache.org/jira/browse/KAFKA-10871
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.7.0
>Reporter: Rohit Deshpande
>Assignee: Rohit Deshpande
>Priority: Major
>  Labels: newbie
>
> While working on https://issues.apache.org/jira/browse/KAFKA-10062 I realized 
> process method in StreamTask is taking
> wallClockTime as input parameter which is redundant as StreamTask already 
> contains 
> time(https://github.com/apache/kafka/blob/2.5.0/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java#L75)
>  field which represents wallClockTime.
> In process method 
> (https://github.com/apache/kafka/blob/2.7/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java#L664),
>  wallClockTime can be passed from StreamTask's time field itself. 
> This method was changed as part of pr: 
> https://github.com/apache/kafka/pull/7997. 
> As part of https://issues.apache.org/jira/browse/KAFKA-10062, I believe 
> wallClockTime need not be stored in 
> ProcessorContext(https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/AbstractProcessorContext.java#L48)
>  but should be fetched from StreamTask's time field. Reference pr: 
> https://github.com/apache/kafka/pull/9744





[GitHub] [kafka] chia7712 commented on a change in pull request #9778: KAFKA-10874 Fix flaky ClientQuotasRequestTest.testAlterIpQuotasRequest

2020-12-23 Thread GitBox


chia7712 commented on a change in pull request #9778:
URL: https://github.com/apache/kafka/pull/9778#discussion_r548375601



##
File path: core/src/test/scala/unit/kafka/server/ClientQuotasRequestTest.scala
##
@@ -212,7 +214,9 @@ class ClientQuotasRequestTest extends BaseRequestTest {
   InetAddress.getByName(unknownHost)
 else
   InetAddress.getByName(entityName)
-assertEquals(expectedMatches(entity), 
servers.head.socketServer.connectionQuotas.connectionRateForIp(entityIp), 0.01)
+TestUtils.retry(1L) {

Review comment:
   Could you add comment for this retry?
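   For reference, the retry pattern under discussion can be sketched as a small self-contained helper (hypothetical code, similar in spirit to TestUtils.retry: re-run the assertion block until it passes or the wait budget expires):

```java
public class RetryExample {
    // Re-run an assertion until it passes or the deadline expires; rethrow the
    // last AssertionError if the condition never becomes true in time.
    static void retry(long maxWaitMs, Runnable assertion) throws InterruptedException {
        long deadline = System.currentTimeMillis() + maxWaitMs;
        while (true) {
            try {
                assertion.run();
                return; // assertion passed
            } catch (AssertionError e) {
                if (System.currentTimeMillis() >= deadline) {
                    throw e; // budget exhausted, surface the failure
                }
                Thread.sleep(10); // back off briefly before retrying
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        long[] rate = {0};
        // Simulate a metric that converges only after a few reads, like a
        // connection-rate sensor that needs time to settle.
        retry(1000, () -> {
            rate[0] += 1;
            if (rate[0] < 3) throw new AssertionError("not yet converged");
        });
        System.out.println(rate[0]);
    }
}
```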
   









[GitHub] [kafka] chia7712 commented on a change in pull request #9776: KAFKA-10878 Check failed message in ProtocolSerializationTest

2020-12-23 Thread GitBox


chia7712 commented on a change in pull request #9776:
URL: https://github.com/apache/kafka/pull/9776#discussion_r548375460



##
File path: 
clients/src/test/java/org/apache/kafka/common/protocol/types/ProtocolSerializationTest.java
##
@@ -414,7 +416,7 @@ public void 
testReadWhenOptionalDataMissingAtTheEndIsNotTolerated() {
 oldFormat.writeTo(buffer);
 buffer.flip();
 SchemaException e = assertThrows(SchemaException.class, () -> 
newSchema.read(buffer));
-e.getMessage().contains("Error reading field 'field2': 
java.nio.BufferUnderflowException");
+assertThat(e.getMessage(), containsString("Error reading field 
'field2': java.nio.BufferUnderflowException"));

Review comment:
   You are right. Checking ```Error reading field 'field2'``` should be 
enough for this test case.
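   A minimal, self-contained illustration of the prefix-only check (plain Java in place of Hamcrest's containsString so it runs standalone):

```java
public class PrefixCheckExample {
    // Hypothetical stand-in for a schema read that fails on missing data.
    static String readField() {
        throw new RuntimeException(
            "Error reading field 'field2': java.nio.BufferUnderflowException");
    }

    public static void main(String[] args) {
        try {
            readField();
        } catch (RuntimeException e) {
            // Checking only the stable prefix keeps the test robust if the
            // underlying cause's class name or message ever changes.
            boolean ok = e.getMessage().contains("Error reading field 'field2'");
            System.out.println(ok);
        }
    }
}
```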









[GitHub] [kafka] g1geordie commented on a change in pull request #9776: KAFKA-10878 Check failed message in ProtocolSerializationTest

2020-12-23 Thread GitBox


g1geordie commented on a change in pull request #9776:
URL: https://github.com/apache/kafka/pull/9776#discussion_r548375097



##
File path: 
clients/src/test/java/org/apache/kafka/common/protocol/types/ProtocolSerializationTest.java
##
@@ -414,7 +416,7 @@ public void 
testReadWhenOptionalDataMissingAtTheEndIsNotTolerated() {
 oldFormat.writeTo(buffer);
 buffer.flip();
 SchemaException e = assertThrows(SchemaException.class, () -> 
newSchema.read(buffer));
-e.getMessage().contains("Error reading field 'field2': 
java.nio.BufferUnderflowException");
+assertThat(e.getMessage(), containsString("Error reading field 
'field2': java.nio.BufferUnderflowException"));

Review comment:
   The SchemaException is thrown by Kafka.
   Its message is `fixed string + field info + e.msg || e.className`;
   the last string here is the class name.
   
   But it seems to me that checking only the prefix is better.









[jira] [Assigned] (KAFKA-10887) Migrate log4j-appender module to JUnit 5

2020-12-23 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-10887:
--

Assignee: GeordieMai  (was: Chia-Ping Tsai)

> Migrate log4j-appender module to JUnit 5
> 
>
> Key: KAFKA-10887
> URL: https://issues.apache.org/jira/browse/KAFKA-10887
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: GeordieMai
>Priority: Major
>






[jira] [Commented] (KAFKA-10887) Migrate log4j-appender module to JUnit 5

2020-12-23 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254390#comment-17254390
 ] 

Chia-Ping Tsai commented on KAFKA-10887:


[~Geordie] sure. Go ahead!

> Migrate log4j-appender module to JUnit 5
> 
>
> Key: KAFKA-10887
> URL: https://issues.apache.org/jira/browse/KAFKA-10887
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: GeordieMai
>Priority: Major
>






[jira] [Comment Edited] (KAFKA-10887) Migrate log4j-appender module to JUnit 5

2020-12-23 Thread GeordieMai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254369#comment-17254369
 ] 

GeordieMai edited comment on KAFKA-10887 at 12/24/20, 2:30 AM:
---

[~chia7712] Can I take this? It seems like something that could be given to a newbie like me.


was (Author: geordie):
[~chia7712] can I take this ?  It seems to be able to give newbie . likes me 

> Migrate log4j-appender module to JUnit 5
> 
>
> Key: KAFKA-10887
> URL: https://issues.apache.org/jira/browse/KAFKA-10887
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>






[jira] [Commented] (KAFKA-10887) Migrate log4j-appender module to JUnit 5

2020-12-23 Thread GeordieMai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254369#comment-17254369
 ] 

GeordieMai commented on KAFKA-10887:


[~chia7712] Can I take this? It seems like something that could be given to a newbie like me.

> Migrate log4j-appender module to JUnit 5
> 
>
> Key: KAFKA-10887
> URL: https://issues.apache.org/jira/browse/KAFKA-10887
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>






[jira] [Assigned] (KAFKA-10886) Kafka crashed in windows environment2

2020-12-23 Thread Wenbing Shen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenbing Shen reassigned KAFKA-10886:


Assignee: (was: Wenbing Shen)

> Kafka crashed in windows environment2
> -
>
> Key: KAFKA-10886
> URL: https://issues.apache.org/jira/browse/KAFKA-10886
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 2.0.0
> Environment: Windows Server
>Reporter: Wenbing Shen
>Priority: Critical
>  Labels: windows
> Attachments: windows_kafka_full_crash.patch
>
>
> I tried using the 
> [Kafka-9458|https://issues.apache.org/jira/browse/KAFKA-9458] patch to fix 
> the Kafka problem in the Windows environment, but it didn't seem to work. 
> The problems include: restarting the Kafka service causes data to be deleted 
> by mistake, and deleting a topic or migrating a partition causes a disk to 
> go offline or the broker to crash.
> [2020-12-23 17:26:11,124] ERROR (kafka-request-handler-11 
> kafka.server.LogDirFailureChannel 76) Error while renaming dir for test-1-1 
> in log dir 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log[2020-12-23
>  17:26:11,124] ERROR (kafka-request-handler-11 
> kafka.server.LogDirFailureChannel 76) Error while renaming dir for test-1-1 
> in log dir 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\logjava.nio.file.AccessDeniedException:
>  
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1
>  -> 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1.7c4fdeca40294cc38aed815b1a8e7663-delete
>  at sun.nio.fs.WindowsException.translateToIOException(Unknown Source) at 
> sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source) at 
> sun.nio.fs.WindowsFileCopy.move(Unknown Source) at 
> sun.nio.fs.WindowsFileSystemProvider.move(Unknown Source) at 
> java.nio.file.Files.move(Unknown Source) at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:786) at 
> kafka.log.Log$$anonfun$renameDir$1.apply$mcV$sp(Log.scala:689) at 
> kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:687) at 
> kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:687) at 
> kafka.log.Log.maybeHandleIOException(Log.scala:1837) at 
> kafka.log.Log.renameDir(Log.scala:687) at 
> kafka.log.LogManager.asyncDelete(LogManager.scala:833) at 
> kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:267) at 
> kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:262) at 
> kafka.utils.CoreUtils$.inLock(CoreUtils.scala:256) at 
> kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:264) at 
> kafka.cluster.Partition.delete(Partition.scala:262) at 
> kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:341) at 
> kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:371)
>  at 
> kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:369)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:369) at 
> kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:222) at 
> kafka.server.KafkaApis.handle(KafkaApis.scala:113) at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:73) at 
> java.lang.Thread.run(Unknown Source) Suppressed: 
> java.nio.file.AccessDeniedException: 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1
>  -> 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1.7c4fdeca40294cc38aed815b1a8e7663-delete
>  at sun.nio.fs.WindowsException.translateToIOException(Unknown Source) at 
> sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source) at 
> sun.nio.fs.WindowsFileCopy.move(Unknown Source) at 
> sun.nio.fs.WindowsFileSystemProvider.move(Unknown Source) at 
> java.nio.file.Files.move(Unknown Source) at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:783) 
> ... 23 more[2020-12-23 17:26:11,127] INFO (LogDirFailureHandler 
> kafka.server.ReplicaManager 66) [ReplicaManager broker=1002] Stopping serving 
> replicas in dir 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log
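The rename that fails above goes through Utils.atomicMoveWithFallback. A minimal, self-contained sketch of that pattern (not Kafka's actual code; names simplified) shows why the log contains a "Suppressed:" frame: the atomic attempt fails first, then the plain-move fallback fails the same way while another handle holds the file open on Windows:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicMoveSketch {
    // Try an atomic rename first; fall back to a plain move if the filesystem
    // does not support atomic moves. If both fail, the first failure is
    // attached as a suppressed exception, matching the log output above.
    static void atomicMoveWithFallback(Path source, Path target) throws IOException {
        try {
            Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException outer) {
            try {
                Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
            } catch (IOException inner) {
                inner.addSuppressed(outer);
                throw inner;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("move-sketch");
        Path src = Files.createFile(dir.resolve("test-1-1"));
        Path dst = dir.resolve("test-1-1.deleted");
        atomicMoveWithFallback(src, dst);
        System.out.println(Files.exists(dst) && !Files.exists(src));
    }
}
```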





[jira] [Assigned] (KAFKA-10886) Kafka crashed in windows environment2

2020-12-23 Thread Wenbing Shen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenbing Shen reassigned KAFKA-10886:


Assignee: Wenbing Shen

> Kafka crashed in windows environment2
> -
>
> Key: KAFKA-10886
> URL: https://issues.apache.org/jira/browse/KAFKA-10886
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 2.0.0
> Environment: Windows Server
>Reporter: Wenbing Shen
>Assignee: Wenbing Shen
>Priority: Critical
>  Labels: windows
> Attachments: windows_kafka_full_crash.patch
>
>
> I tried using the 
> [Kafka-9458|https://issues.apache.org/jira/browse/KAFKA-9458] patch to fix 
> the Kafka problem in the Windows environment, but it didn't seem to work. 
> The problems include: restarting the Kafka service causes data to be deleted 
> by mistake, and deleting a topic or migrating a partition causes a disk to 
> go offline or the broker to crash.
> [2020-12-23 17:26:11,124] ERROR (kafka-request-handler-11 
> kafka.server.LogDirFailureChannel 76) Error while renaming dir for test-1-1 
> in log dir 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log[2020-12-23
>  17:26:11,124] ERROR (kafka-request-handler-11 
> kafka.server.LogDirFailureChannel 76) Error while renaming dir for test-1-1 
> in log dir 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\logjava.nio.file.AccessDeniedException:
>  
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1
>  -> 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1.7c4fdeca40294cc38aed815b1a8e7663-delete
>  at sun.nio.fs.WindowsException.translateToIOException(Unknown Source) at 
> sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source) at 
> sun.nio.fs.WindowsFileCopy.move(Unknown Source) at 
> sun.nio.fs.WindowsFileSystemProvider.move(Unknown Source) at 
> java.nio.file.Files.move(Unknown Source) at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:786) at 
> kafka.log.Log$$anonfun$renameDir$1.apply$mcV$sp(Log.scala:689) at 
> kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:687) at 
> kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:687) at 
> kafka.log.Log.maybeHandleIOException(Log.scala:1837) at 
> kafka.log.Log.renameDir(Log.scala:687) at 
> kafka.log.LogManager.asyncDelete(LogManager.scala:833) at 
> kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:267) at 
> kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:262) at 
> kafka.utils.CoreUtils$.inLock(CoreUtils.scala:256) at 
> kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:264) at 
> kafka.cluster.Partition.delete(Partition.scala:262) at 
> kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:341) at 
> kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:371)
>  at 
> kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:369)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:369) at 
> kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:222) at 
> kafka.server.KafkaApis.handle(KafkaApis.scala:113) at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:73) at 
> java.lang.Thread.run(Unknown Source) Suppressed: 
> java.nio.file.AccessDeniedException: 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1
>  -> 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1.7c4fdeca40294cc38aed815b1a8e7663-delete
>  at sun.nio.fs.WindowsException.translateToIOException(Unknown Source) at 
> sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source) at 
> sun.nio.fs.WindowsFileCopy.move(Unknown Source) at 
> sun.nio.fs.WindowsFileSystemProvider.move(Unknown Source) at 
> java.nio.file.Files.move(Unknown Source) at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:783) 
> ... 23 more
> [2020-12-23 17:26:11,127] INFO (LogDirFailureHandler 
> kafka.server.ReplicaManager 66) [ReplicaManager broker=1002] Stopping serving 
> replicas in dir 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10722) Timestamped store is used even if not desired

2020-12-23 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-10722.
-
Fix Version/s: 2.8.0
   Resolution: Fixed

Added you to the list of contributors and assigned the ticket to you. You can 
now also self-assign tickets.

Thanks for reporting the issue and for helping to improve the JavaDocs.

> Timestamped store is used even if not desired
> -
>
> Key: KAFKA-10722
> URL: https://issues.apache.org/jira/browse/KAFKA-10722
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.4.1, 2.6.0
>Reporter: fml2
>Assignee: fml2
>Priority: Major
> Fix For: 2.8.0
>
>
> I have a stream which I then group and aggregate (this results in a KTable). 
> When aggregating, I explicitly request that the result table be materialized 
> with a regular (non-timestamped) store.
> After that, the KTable is filtered and streamed. This stream is processed by 
> a processor that accesses the store.
> The problem/bug is that even though I ask for a non-timestamped store, a 
> timestamped one is used, which leads to a ClassCastException in the processor 
> (it iterates over the store and expects the items to be of type "KeyValue", 
> but they are of type "ValueAndTimestamp").
> Here is the code (schematically).
> First, I define the topology:
> {code:java}
> KTable table = ...aggregate(
>   initializer, // initializer for the KTable row
>   aggregator, // aggregator
>   Materialized.as(Stores.persistentKeyValueStore("MyStore")) // <-- 
> Non-Timestamped!
> .withKeySerde(...).withValueSerde(...));
> table.toStream().process(theProcessor);
> {code}
> In the class for the processor:
> {code:java}
> public void init(ProcessorContext context) {
>var store = context.getStateStore("MyStore"); // Returns a 
> TimestampedKeyValueStore!
> }
> {code}
> A timestamped store is returned even though I explicitly asked for a 
> non-timestamped one!
>  
> I tried to find the cause for this behaviour and think that I've found it. It 
> lies in this line: 
> [https://github.com/apache/kafka/blob/cfc813537e955c267106eea989f6aec4879e14d7/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KGroupedStreamImpl.java#L241]
> There, TimestampedKeyValueStoreMaterializer is used regardless of whether the 
> materialization supplier is timestamped or not.
> I think this is a bug.
>  
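The ClassCastException described above can be reproduced without any Kafka dependency: an unchecked cast makes a collection of wrapper objects look like a collection of plain values, and the failure only surfaces when an element is read. A minimal self-contained sketch (the `ValueAndTimestamp` class below is a simplified stand-in for illustration, not Kafka's actual class):

```java
import java.util.ArrayList;
import java.util.List;

public class TimestampedStoreCastDemo {
    // Simplified stand-in for Kafka's ValueAndTimestamp wrapper (illustration only).
    static final class ValueAndTimestamp {
        final String value;
        final long timestamp;
        ValueAndTimestamp(final String value, final long timestamp) {
            this.value = value;
            this.timestamp = timestamp;
        }
    }

    /** Returns true if reading the store through the wrongly typed view fails. */
    static boolean castFails() {
        final List<Object> rawStore = new ArrayList<>();
        rawStore.add(new ValueAndTimestamp("v1", 100L)); // the store really holds wrappers

        @SuppressWarnings("unchecked") // compiles fine: the mismatch is invisible here
        final List<String> typedView = (List<String>) (List<?>) rawStore;

        try {
            final String v = typedView.get(0); // ClassCastException fires here, at use time
            return v == null;
        } catch (final ClassCastException e) {
            return true;
        }
    }

    public static void main(final String[] args) {
        System.out.println("castFails=" + castFails()); // prints castFails=true
    }
}
```

This mirrors why the processor only fails while iterating: the type mismatch is erased at the point where the store handle is obtained.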





[jira] [Assigned] (KAFKA-10722) Timestamped store is used even if not desired

2020-12-23 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-10722:
---

Assignee: fml2

> Timestamped store is used even if not desired
> -
>
> Key: KAFKA-10722
> URL: https://issues.apache.org/jira/browse/KAFKA-10722
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.4.1, 2.6.0
>Reporter: fml2
>Assignee: fml2
>Priority: Major
>
> I have a stream which I then group and aggregate (this results in a KTable). 
> When aggregating, I explicitly request that the result table be materialized 
> with a regular (non-timestamped) store.
> After that, the KTable is filtered and streamed. This stream is processed by 
> a processor that accesses the store.
> The problem/bug is that even though I ask for a non-timestamped store, a 
> timestamped one is used, which leads to a ClassCastException in the processor 
> (it iterates over the store and expects the items to be of type "KeyValue", 
> but they are of type "ValueAndTimestamp").
> Here is the code (schematically).
> First, I define the topology:
> {code:java}
> KTable table = ...aggregate(
>   initializer, // initializer for the KTable row
>   aggregator, // aggregator
>   Materialized.as(Stores.persistentKeyValueStore("MyStore")) // <-- 
> Non-Timestamped!
> .withKeySerde(...).withValueSerde(...));
> table.toStream().process(theProcessor);
> {code}
> In the class for the processor:
> {code:java}
> public void init(ProcessorContext context) {
>var store = context.getStateStore("MyStore"); // Returns a 
> TimestampedKeyValueStore!
> }
> {code}
> A timestamped store is returned even though I explicitly asked for a 
> non-timestamped one!
>  
> I tried to find the cause for this behaviour and think that I've found it. It 
> lies in this line: 
> [https://github.com/apache/kafka/blob/cfc813537e955c267106eea989f6aec4879e14d7/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KGroupedStreamImpl.java#L241]
> There, TimestampedKeyValueStoreMaterializer is used regardless of whether the 
> materialization supplier is timestamped or not.
> I think this is a bug.
>  





[GitHub] [kafka] mjsax commented on pull request #9606: [KAFKA-10722] doc: Improve JavaDoc for KGroupedStream.aggregate and other similar methods

2020-12-23 Thread GitBox


mjsax commented on pull request #9606:
URL: https://github.com/apache/kafka/pull/9606#issuecomment-750589950


   Thanks for updating the PR. Merged to `trunk`.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] mjsax merged pull request #9606: [KAFKA-10722] doc: Improve JavaDoc for KGroupedStream.aggregate and other similar methods

2020-12-23 Thread GitBox


mjsax merged pull request #9606:
URL: https://github.com/apache/kafka/pull/9606


   







[GitHub] [kafka] fml2 commented on a change in pull request #9606: [KAFKA-10722] doc: Improve JavaDoc for KGroupedStream.aggregate and other similar methods

2020-12-23 Thread GitBox


fml2 commented on a change in pull request #9606:
URL: https://github.com/apache/kafka/pull/9606#discussion_r548280702



##
File path: 
streams/src/main/java/org/apache/kafka/streams/kstream/TimeWindowedKStream.java
##
@@ -67,7 +67,8 @@
  * {@link StreamsConfig#CACHE_MAX_BYTES_BUFFERING_CONFIG cache size}, and
  * {@link StreamsConfig#COMMIT_INTERVAL_MS_CONFIG commit intervall}.
  * 
- * For failure and recovery the store will be backed by an internal 
changelog topic that will be created in Kafka.
+ * For failure and recovery the store (which always will be of type {@link 
TimestampedKeyValueStore}) will be backed by

Review comment:
   Updated









[GitHub] [kafka] fml2 commented on a change in pull request #9606: [KAFKA-10722] doc: Improve JavaDoc for KGroupedStream.aggregate and other similar methods

2020-12-23 Thread GitBox


fml2 commented on a change in pull request #9606:
URL: https://github.com/apache/kafka/pull/9606#discussion_r548280588



##
File path: 
streams/src/main/java/org/apache/kafka/streams/kstream/TimeWindowedCogroupedKStream.java
##
@@ -77,7 +77,8 @@
  * {@link StreamsConfig#CACHE_MAX_BYTES_BUFFERING_CONFIG cache size}, and
  * {@link StreamsConfig#COMMIT_INTERVAL_MS_CONFIG commit intervall}.
  * 
- * For failure and recovery the store will be backed by an internal 
changelog topic that will be created in Kafka.
+ * For failure and recovery the store (which always will be of type {@link 
TimestampedKeyValueStore}) will be backed by

Review comment:
   Updated









[GitHub] [kafka] C0urante opened a new pull request #9784: MINOR: Fix connector startup error logging

2020-12-23 Thread GitBox


C0urante opened a new pull request #9784:
URL: https://github.com/apache/kafka/pull/9784


   If a connector fails on startup, the original cause of the error gets 
discarded by the framework and the only message that gets logged looks like 
this:
   
   ```
   [2020-12-04 16:46:30,464] ERROR [Worker clientId=connect-1, 
groupId=connect-cluster] Failed to start connector 'conn-1' 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder)
   org.apache.kafka.connect.errors.ConnectException: Failed to start connector: 
conn-1
   at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.lambda$startConnector$5(DistributedHerder.java:1297)
   at 
org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:258)
   at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1321)
   at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1400(DistributedHerder.java:127)
   at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder$13.call(DistributedHerder.java:1329)
   at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder$13.call(DistributedHerder.java:1325)
   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   at java.base/java.lang.Thread.run(Thread.java:834)
   ```







[jira] [Commented] (KAFKA-10876) Duplicate connector/task create requests lead to incorrect FAILED status

2020-12-23 Thread Chris Egerton (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254244#comment-17254244
 ] 

Chris Egerton commented on KAFKA-10876:
---

[~xakassi] they're definitely related, but I think the two issues are slightly 
different. KAFKA-7878 is about one bug that may lead to attempted duplicate 
task instantiation on a worker; there are other potential bugs that could lead 
to that as well. This issue isn't really centered around any of those bugs, but 
more about how the framework responds when it runs into that case. Ideally, all 
of those bugs would be fixed and this ticket would be irrelevant, but 
until/unless we have complete confidence that that's the case, we should still 
alter the behavior of the framework to not incorrectly mark tasks as {{FAILED}} 
when that happens.

> Duplicate connector/task create requests lead to incorrect FAILED status
> 
>
> Key: KAFKA-10876
> URL: https://issues.apache.org/jira/browse/KAFKA-10876
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Priority: Major
>
> If a Connect worker tries to start a connector or task that it is already 
> running, an error will be logged and the connector/task will be marked as 
> {{FAILED}}. This logic is implemented in several places:
>  * 
> [https://github.com/apache/kafka/blob/300909d9e60eb1d5e80f4d744d3662a105ac0c15/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java#L257-L262]
>  * 
> [https://github.com/apache/kafka/blob/300909d9e60eb1d5e80f4d744d3662a105ac0c15/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java#L299-L306]
>  * 
> [https://github.com/apache/kafka/blob/300909d9e60eb1d5e80f4d744d3662a105ac0c15/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java#L511-L512]
>  * 
> [https://github.com/apache/kafka/blob/300909d9e60eb1d5e80f4d744d3662a105ac0c15/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java#L570-L572]
>  
> Although it's certainly abnormal for a worker to run into this case and an 
> {{ERROR}}-level log message is warranted when it occurs, the connector/task 
> should not be marked as {{FAILED}}, as there is still an instance of that 
> connector/task running on the worker.
>  
> Either the worker logic should be updated to avoid marking connectors/tasks 
> as {{FAILED}} in this case, or it should manually halt the existing 
> connector/task before creating a new instance in its place. The first option 
> is easier and more intuitive, but if it's ever possible that the 
> already-running connector/task instance has an outdated configuration and the 
> to-be-created connector/task has an up-to-date configuration, the second 
> option would have correct behavior (while the first would not).
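The first option (log an ERROR but leave the live instance's status untouched) can be sketched with plain Java. The names `startConnector` and `running` below are hypothetical stand-ins for the Worker's internals, not the actual Connect code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DuplicateStartDemo {
    enum Status { RUNNING, FAILED }

    // Hypothetical stand-in for the worker's registry of live connectors.
    static final Map<String, Status> running = new ConcurrentHashMap<>();

    /**
     * Returns the connector's status after a start attempt. A duplicate start
     * request is logged loudly but does NOT overwrite the live instance's status.
     */
    static Status startConnector(final String name) {
        final Status prev = running.putIfAbsent(name, Status.RUNNING);
        if (prev != null) {
            // Abnormal case: already running. Log an ERROR, keep the existing instance.
            System.err.println("ERROR duplicate start request for connector " + name);
            return prev; // still RUNNING, not FAILED
        }
        return Status.RUNNING;
    }

    public static void main(final String[] args) {
        startConnector("conn-1");
        final Status afterDuplicate = startConnector("conn-1"); // duplicate request
        System.out.println(afterDuplicate); // prints RUNNING
    }
}
```

The `putIfAbsent` call makes the duplicate check and registration a single atomic step, which also avoids a race between two concurrent start requests.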





[GitHub] [kafka] mjsax merged pull request #9668: MINOR: add test for repartition/source-topic/changelog optimization

2020-12-23 Thread GitBox


mjsax merged pull request #9668:
URL: https://github.com/apache/kafka/pull/9668


   







[GitHub] [kafka] mjsax commented on a change in pull request #9606: [KAFKA-10722] doc: Improve JavaDoc for KGroupedStream.aggregate and other similar methods

2020-12-23 Thread GitBox


mjsax commented on a change in pull request #9606:
URL: https://github.com/apache/kafka/pull/9606#discussion_r548180364



##
File path: 
streams/src/main/java/org/apache/kafka/streams/kstream/TimeWindowedCogroupedKStream.java
##
@@ -77,7 +77,8 @@
  * {@link StreamsConfig#CACHE_MAX_BYTES_BUFFERING_CONFIG cache size}, and
  * {@link StreamsConfig#COMMIT_INTERVAL_MS_CONFIG commit intervall}.
  * 
- * For failure and recovery the store will be backed by an internal 
changelog topic that will be created in Kafka.
+ * For failure and recovery the store (which always will be of type {@link 
TimestampedKeyValueStore}) will be backed by

Review comment:
   For time-windows, it would be a `TimestampedWindowStore` (not 
tkv-store). (same below for the other method of this class)
   
   Note the signature of `Materialized<K, VR, WindowStore<Bytes, byte[]>>`, which 
uses `WindowStore`, not `KeyValueStore`.

##
File path: 
streams/src/main/java/org/apache/kafka/streams/kstream/TimeWindowedKStream.java
##
@@ -67,7 +67,8 @@
  * {@link StreamsConfig#CACHE_MAX_BYTES_BUFFERING_CONFIG cache size}, and
  * {@link StreamsConfig#COMMIT_INTERVAL_MS_CONFIG commit intervall}.
  * 
- * For failure and recovery the store will be backed by an internal 
changelog topic that will be created in Kafka.
+ * For failure and recovery the store (which always will be of type {@link 
TimestampedKeyValueStore}) will be backed by

Review comment:
   As above. Should be `TimestampedWindowStore` for this class.









[jira] [Updated] (KAFKA-10815) EosTestDriver#verifyAllTransactionFinished should break loop if all partitions are verified

2020-12-23 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-10815:

Fix Version/s: (was: 2.6.1)
   2.6.2

> EosTestDriver#verifyAllTransactionFinished should break loop if all 
> partitions are verified
> ---
>
> Key: KAFKA-10815
> URL: https://issues.apache.org/jira/browse/KAFKA-10815
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
> Fix For: 2.8.0, 2.7.1, 2.6.2
>
>
> If we don't break it when all partitions are verified, the loop will take 10 
> mins ...
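The fix amounts to terminating the polling loop as soon as the set of unverified partitions is empty, instead of always running out the full timeout. A self-contained sketch of the loop shape, under stated assumptions: the partition names and the one-per-poll "verification" step are illustrative, not the actual EosTestDriver logic:

```java
import java.util.HashSet;
import java.util.Set;

public class VerifyLoopDemo {
    /**
     * Polls until every partition is verified, breaking early rather than
     * spinning until the deadline. Returns the number of poll iterations used.
     */
    static int verifyAll(final Set<String> partitions, final int maxIterations) {
        final Set<String> unverified = new HashSet<>(partitions);
        int iterations = 0;
        // Without the emptiness check in the loop condition, this would keep
        // polling until maxIterations even after everything is verified.
        while (!unverified.isEmpty() && iterations < maxIterations) {
            iterations++;
            // Illustrative "verification": one partition completes per poll.
            unverified.remove(unverified.iterator().next());
        }
        return iterations;
    }

    public static void main(final String[] args) {
        final Set<String> parts = new HashSet<>();
        parts.add("t-0");
        parts.add("t-1");
        parts.add("t-2");
        System.out.println(verifyAll(parts, 1_000)); // prints 3, not 1000
    }
}
```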





[GitHub] [kafka] mjsax commented on a change in pull request #9733: KAFKA-10017: fix 2 issues in EosBetaUpgradeIntegrationTest

2020-12-23 Thread GitBox


mjsax commented on a change in pull request #9733:
URL: https://github.com/apache/kafka/pull/9733#discussion_r548160122



##
File path: 
streams/src/test/java/org/apache/kafka/streams/integration/EosBetaUpgradeIntegrationTest.java
##
@@ -246,18 +261,19 @@ public void shouldUpgradeFromEosAlphaToEosBeta() throws 
Exception {
 assignmentListener.waitForNextStableAssignment(MAX_WAIT_TIME_MS);
 waitForRunning(stateTransitions1);
 
-streams2Alpha = getKafkaStreams("appDir2", 
StreamsConfig.EXACTLY_ONCE);
+streams2Alpha = getKafkaStreams(APP_DIR_2, 
StreamsConfig.EXACTLY_ONCE);
 streams2Alpha.setStateListener(
 (newState, oldState) -> 
stateTransitions2.add(KeyValue.pair(oldState, newState))
 );
 stateTransitions1.clear();
 
-assignmentListener.prepareForRebalance();
+prevNumAssignments = assignmentListener.prepareForRebalance();
 streams2Alpha.cleanUp();
 streams2Alpha.start();
 assignmentListener.waitForNextStableAssignment(MAX_WAIT_TIME_MS);
+expectedNumAssignments = assignmentListener.numTotalAssignments() 
- prevNumAssignments;
+waitForNumRebalancingToRunning(stateTransitions2, 
expectedNumAssignments);
 waitForRunning(stateTransitions1);

Review comment:
   If your reasoning is right, do we need to use 
`waitForNumRebalancingToRunning` here, too? If there are multiple rebalances, 
both clients would participate?
   
   Also, your PR does not update all stages where we wait for state changes. Why? 
Would we not need to apply the logic each time?









[GitHub] [kafka] mjsax commented on pull request #9733: KAFKA-10017: fix 2 issues in EosBetaUpgradeIntegrationTest

2020-12-23 Thread GitBox


mjsax commented on pull request #9733:
URL: https://github.com/apache/kafka/pull/9733#issuecomment-750436773


   Thanks for the input! The test behavior is slightly different in `2.6` 
compared to `2.7/trunk` though and thus, I am not sure if we can apply the same 
reasoning (that is also the reason why there is a separate PR for `2.6`). I did 
not have time to look into the `2.6` PR yet, as I wanted to resolve this PR 
first.
   
   







[GitHub] [kafka] chia7712 commented on pull request #9706: KAFKA-10815 EosTestDriver#verifyAllTransactionFinished should break loop if all partitions are verified

2020-12-23 Thread GitBox


chia7712 commented on pull request #9706:
URL: https://github.com/apache/kafka/pull/9706#issuecomment-750436117


   > thanks for getting this into 2.7 and 2.6. (Btw: there is no need to do 
PRs. You can also cherry-pick and push directly.)
   
   Thanks for the kind reminder!







[GitHub] [kafka] mjsax commented on pull request #9706: KAFKA-10815 EosTestDriver#verifyAllTransactionFinished should break loop if all partitions are verified

2020-12-23 Thread GitBox


mjsax commented on pull request #9706:
URL: https://github.com/apache/kafka/pull/9706#issuecomment-750433680


   @chia7712 -- thanks for getting this into 2.7 and 2.6. (Btw: there is no 
need to do PRs. You can also cherry-pick and push directly.)







[GitHub] [kafka] bbejeck commented on pull request #9774: MINOR: Add 2.7.0 release to broker and client compat tests

2020-12-23 Thread GitBox


bbejeck commented on pull request #9774:
URL: https://github.com/apache/kafka/pull/9774#issuecomment-750411633


   System tests passed







[jira] [Updated] (KAFKA-10887) Migrate log4j-appender module to JUnit 5

2020-12-23 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated KAFKA-10887:
---
Summary: Migrate log4j-appender module to JUnit 5  (was: Migrate 
log4j-appender module to Junit 5)

> Migrate log4j-appender module to JUnit 5
> 
>
> Key: KAFKA-10887
> URL: https://issues.apache.org/jira/browse/KAFKA-10887
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>






[jira] [Created] (KAFKA-10887) Migrate log4j-appender module to Junit 5

2020-12-23 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-10887:
--

 Summary: Migrate log4j-appender module to Junit 5
 Key: KAFKA-10887
 URL: https://issues.apache.org/jira/browse/KAFKA-10887
 Project: Kafka
  Issue Type: Sub-task
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai








[jira] [Assigned] (KAFKA-7341) Migrate core module to JUnit 5

2020-12-23 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-7341:
-

Assignee: Chia-Ping Tsai  (was: Ismael Juma)

> Migrate core module to JUnit 5
> --
>
> Key: KAFKA-7341
> URL: https://issues.apache.org/jira/browse/KAFKA-7341
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Chia-Ping Tsai
>Priority: Major
>






[jira] [Assigned] (KAFKA-7342) Migrate streams modules to JUnit 5

2020-12-23 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-7342:
-

Assignee: Chia-Ping Tsai  (was: Ismael Juma)

> Migrate streams modules to JUnit 5
> --
>
> Key: KAFKA-7342
> URL: https://issues.apache.org/jira/browse/KAFKA-7342
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Chia-Ping Tsai
>Priority: Major
>






[GitHub] [kafka] chia7712 merged pull request #9763: MINOR: Use ApiUtils' methods static imported consistently.

2020-12-23 Thread GitBox


chia7712 merged pull request #9763:
URL: https://github.com/apache/kafka/pull/9763


   







[jira] [Resolved] (KAFKA-10815) EosTestDriver#verifyAllTransactionFinished should break loop if all partitions are verified

2020-12-23 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-10815.

Resolution: Fixed

> EosTestDriver#verifyAllTransactionFinished should break loop if all 
> partitions are verified
> ---
>
> Key: KAFKA-10815
> URL: https://issues.apache.org/jira/browse/KAFKA-10815
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
> Fix For: 2.6.1, 2.8.0, 2.7.1
>
>
> If we don't break it when all partitions are verified, the loop will take 10 
> mins ...





[GitHub] [kafka] chia7712 merged pull request #9783: KAFKA-10815 EosTestDriver#verifyAllTransactionFinished should break l…

2020-12-23 Thread GitBox


chia7712 merged pull request #9783:
URL: https://github.com/apache/kafka/pull/9783


   







[GitHub] [kafka] chia7712 merged pull request #9782: KAFKA-10815 EosTestDriver#verifyAllTransactionFinished should break l…

2020-12-23 Thread GitBox


chia7712 merged pull request #9782:
URL: https://github.com/apache/kafka/pull/9782


   







[GitHub] [kafka] chia7712 commented on a change in pull request #9776: KAFKA-10878 Check failed message in ProtocolSerializationTest

2020-12-23 Thread GitBox


chia7712 commented on a change in pull request #9776:
URL: https://github.com/apache/kafka/pull/9776#discussion_r548031919



##
File path: 
clients/src/test/java/org/apache/kafka/common/protocol/types/ProtocolSerializationTest.java
##
@@ -414,7 +416,7 @@ public void 
testReadWhenOptionalDataMissingAtTheEndIsNotTolerated() {
 oldFormat.writeTo(buffer);
 buffer.flip();
 SchemaException e = assertThrows(SchemaException.class, () -> 
newSchema.read(buffer));
-e.getMessage().contains("Error reading field 'field2': 
java.nio.BufferUnderflowException");
+assertThat(e.getMessage(), containsString("Error reading field 
'field2': java.nio.BufferUnderflowException"));

Review comment:
   Is this error message from the Java standard library? If so, could it differ 
under a different locale (i.e., a different language)? 
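The bug the diff above fixes is easy to miss: calling `contains` on the message without asserting the result is a no-op, so the test passes no matter what the exception says. A dependency-free sketch of the difference (using plain `String.contains`, not the Hamcrest matcher the PR switches to):

```java
public class UnassertedContainsDemo {
    /** Mimics the broken test: the contains() result is discarded, nothing is checked. */
    static boolean silentlyPasses(final String message) {
        message.contains("Error reading field 'field2'"); // result discarded: checks nothing
        return true; // the "test" reaches the end regardless of the message
    }

    /** Mimics the fixed test: the contains() result is actually propagated. */
    static boolean actuallyChecks(final String message) {
        return message.contains("Error reading field 'field2'");
    }

    public static void main(final String[] args) {
        final String wrong = "some unrelated failure";
        System.out.println(silentlyPasses(wrong));  // prints true: the bug is hidden
        System.out.println(actuallyChecks(wrong));  // prints false: the bug is caught
    }
}
```

Wrapping the check in `assertThat(..., containsString(...))`, as the PR does, both enforces the check and produces a readable diff on failure.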









[jira] [Commented] (KAFKA-6579) Consolidate window store and session store unit tests into a single class

2020-12-23 Thread Stanislav Pak (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17254130#comment-17254130
 ] 

Stanislav Pak commented on KAFKA-6579:
--

This looks non-trivial based on previous attempts, but I will give it a shot. 
Try lock [~guozhang]..

> Consolidate window store and session store unit tests into a single class
> -
>
> Key: KAFKA-6579
> URL: https://issues.apache.org/jira/browse/KAFKA-6579
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Major
>  Labels: newbie, unit-test
>
> For key value store, we have a {{AbstractKeyValueStoreTest}} that is shared 
> among all its implementations; however for window and session stores, each 
> class has its own independent unit test classes that do not share the test 
> coverage. In fact, many of these test classes share the same unit test 
> functions (e.g. {{RocksDBWindowStoreTest}}, 
> {{CompositeReadOnlyWindowStoreTest}} and {{CachingWindowStoreTest}}).
> It is better to use the same pattern as for key value stores to consolidate 
> these test functions into a shared base class.
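The shared-base-class pattern the ticket asks for can be sketched without any test framework: the abstract class owns the checks, and each concrete store implementation only supplies a factory method. The names `SimpleStore`, `AbstractStoreCheck`, and `MapStoreCheck` below are illustrative, not the Streams test classes:

```java
public class SharedStoreTestDemo {
    interface SimpleStore {
        void put(String k, String v);
        String get(String k);
    }

    /** Abstract base owning the shared test logic; subclasses only provide the store. */
    abstract static class AbstractStoreCheck {
        abstract SimpleStore createStore();

        boolean putThenGetRoundTrips() {
            final SimpleStore store = createStore();
            store.put("k", "v");
            return "v".equals(store.get("k"));
        }
    }

    /** One concrete implementation, reusing the entire shared suite for free. */
    static class MapStoreCheck extends AbstractStoreCheck {
        @Override
        SimpleStore createStore() {
            final java.util.Map<String, String> m = new java.util.HashMap<>();
            return new SimpleStore() {
                public void put(final String k, final String v) { m.put(k, v); }
                public String get(final String k) { return m.get(k); }
            };
        }
    }

    public static void main(final String[] args) {
        System.out.println(new MapStoreCheck().putThenGetRoundTrips()); // prints true
    }
}
```

Each additional store implementation (RocksDB-backed, caching, read-only composite) would add only a `createStore()` override, so the test functions are written once.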





[GitHub] [kafka] chia7712 commented on pull request #9520: MINOR: replace test "expected" parameter by assertThrows

2020-12-23 Thread GitBox


chia7712 commented on pull request #9520:
URL: https://github.com/apache/kafka/pull/9520#issuecomment-750355111


   > Feel free to take one or more of the other ones. core would be a good 
candidate perhaps.
   
   I’d like to take over both core module and stream module :)







[GitHub] [kafka] chia7712 commented on pull request #9520: MINOR: replace test "expected" parameter by assertThrows

2020-12-23 Thread GitBox


chia7712 commented on pull request #9520:
URL: https://github.com/apache/kafka/pull/9520#issuecomment-750351742


   > I was about halfway through the review of the second to last commit. :) 
Let's get this PR over the line and not change it further please. 
   
   I am so sorry my force push disrupted your review. I will never do 
that again :(







[GitHub] [kafka] ijuma edited a comment on pull request #9520: MINOR: replace test "expected" parameter by assertThrows

2020-12-23 Thread GitBox


ijuma edited a comment on pull request #9520:
URL: https://github.com/apache/kafka/pull/9520#issuecomment-750345209


   @chia7712 Thanks. I was about halfway through the review of the second to 
last commit. :) Let's get this PR over the line and not change it further 
please. Yes, I had filed a few JIRAs for migrating to JUnit 5, see 
https://issues.apache.org/jira/browse/KAFKA-7339.
   
   I think we should do it on a module per module basis to make it easier to 
review (the JIRAs are structured in that way). I was going to try the clients 
module after this PR is merged. Feel free to take one or more of the other 
ones. `core` would be a good candidate perhaps.







[GitHub] [kafka] ijuma commented on pull request #9520: MINOR: replace test "expected" parameter by assertThrows

2020-12-23 Thread GitBox


ijuma commented on pull request #9520:
URL: https://github.com/apache/kafka/pull/9520#issuecomment-750345209


   @chia7712 Thanks. I was about halfway through the review of the second to 
last commit. :) Let's get this PR over the line and not change it further 
please. Yes, I had filed a few JIRAs for migrating to JUnit 5, see 
https://issues.apache.org/jira/browse/KAFKA-7339.
   
   I think we should do it on a module per module basis to make it easier to 
review (the JIRAs are structured in that way). I was going to try the clients 
module after this PR is merged. Feel free to take one of the other ones. `core` 
would be a good candidate perhaps.







[GitHub] [kafka] chia7712 edited a comment on pull request #9520: MINOR: replace test "expected" parameter by assertThrows

2020-12-23 Thread GitBox


chia7712 edited a comment on pull request #9520:
URL: https://github.com/apache/kafka/pull/9520#issuecomment-750005283


   @ijuma the latest commit replaces all ```ExpectedException``` and 
```@Test(expected = Exception.class)``` usages with ```assertThrows```. It 
seems to me we can get closer to JUnit 5 once this PR is in trunk. 
Furthermore, I'd like to file follow-up PRs to remove all JUnit 4 code:
   
   1. Replace ```org.hamcrest``` by JUnit assertions
   2. Replace ScalaTest by JUnit
   3. Replace JUnit 4 APIs by JUnit 5 for each module:
      1. rewrite parameterized tests
      2. replace ```org.junit``` by ```org.junit.jupiter.api```
      3. replace ```IntegrationTest``` by the tag ```Integration```
      4. replace ```useJUnit``` by ```useJUnitPlatform```
      5. remove ```testCompile libs.junitVintageEngine```
   
   WDYT?
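For context, the migration this PR performs can be illustrated with a minimal stand-in for JUnit 5's `assertThrows`. The sketch below avoids the JUnit dependency entirely; the real API lives in `org.junit.jupiter.api.Assertions`, and `parsePort` is a made-up example method:

```java
public class AssertThrowsDemo {
    // Minimal stand-in for JUnit 5's assertThrows, to illustrate the pattern:
    // run the body, and fail unless it throws the expected exception type.
    static <T extends Throwable> T assertThrows(Class<T> expected, Runnable body) {
        try {
            body.run();
        } catch (Throwable t) {
            if (expected.isInstance(t))
                return expected.cast(t);  // returned so callers can assert on it
            throw new AssertionError("unexpected exception type: " + t.getClass(), t);
        }
        throw new AssertionError("expected " + expected.getName() + " but nothing was thrown");
    }

    // Made-up method under test.
    static int parsePort(int port) {
        if (port < 0 || port > 65535)
            throw new IllegalArgumentException("invalid port: " + port);
        return port;
    }

    public static void main(String[] args) {
        // JUnit 4 style:  @Test(expected = IllegalArgumentException.class)
        // JUnit 5 style:  the thrown exception is captured, so the exact
        //                 throwing statement and the message can be asserted.
        IllegalArgumentException e =
                assertThrows(IllegalArgumentException.class, () -> parsePort(-1));
        System.out.println(e.getMessage());  // prints "invalid port: -1"
    }
}
```

Unlike the `expected` parameter, which passes if the exception is thrown anywhere in the test method, `assertThrows` pinpoints the one statement that must throw and hands the exception back for further assertions.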







[jira] [Comment Edited] (KAFKA-9458) Kafka crashed in windows environment

2020-12-23 Thread Wenbing Shen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17251530#comment-17251530
 ] 

Wenbing Shen edited comment on KAFKA-9458 at 12/23/20, 9:53 AM:


The current patch is insufficient. When a topic is deleted or a partition 
migration is carried out, the service will still be suspended or the disk will 
go offline. I have provided the following patch file, which worked in my own 
testing; my Kafka version is 2.0.0.

[^kafka_windows_crash_by_delete_topic_and_Partition_migration]

 

In https://issues.apache.org/jira/browse/KAFKA-10886 I provide a complete and 
effective patch.


was (Author: wenbing.shen):
The current patch is deficient. When topic is deleted or partition migration is 
carried out, the service will still be suspended or the disk will be offline. I 
have provided the following patch file, which is effective for self-test,My 
Kafka version is 2.0.0 .

[^kafka_windows_crash_by_delete_topic_and_Partition_migration]

> Kafka crashed in windows environment
> 
>
> Key: KAFKA-9458
> URL: https://issues.apache.org/jira/browse/KAFKA-9458
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 2.4.0
> Environment: Windows Server 2019
>Reporter: hirik
>Priority: Critical
>  Labels: windows
> Attachments: Windows_crash_fix.patch, 
> kafka_windows_crash_by_delete_topic_and_Partition_migration, logs.zip
>
>
> Hi,
> while I was trying to validate Kafka retention policy, Kafka Server crashed 
> with below exception trace. 
> [2020-01-21 17:10:40,475] INFO [Log partition=test1-3, 
> dir=C:\Users\Administrator\Downloads\kafka\bin\windows\..\..\data\kafka] 
> Rolled new log segment at offset 1 in 52 ms. (kafka.log.Log)
> [2020-01-21 17:10:40,484] ERROR Error while deleting segments for test1-3 in 
> dir C:\Users\Administrator\Downloads\kafka\bin\windows\..\..\data\kafka 
> (kafka.server.LogDirFailureChannel)
> java.nio.file.FileSystemException: 
> C:\Users\Administrator\Downloads\kafka\bin\windows\..\..\data\kafka\test1-3\.timeindex
>  -> 
> C:\Users\Administrator\Downloads\kafka\bin\windows\..\..\data\kafka\test1-3\.timeindex.deleted:
>  The process cannot access the file because it is being used by another 
> process.
> at 
> java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:92)
>  at 
> java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
>  at java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:395)
>  at 
> java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:292)
>  at java.base/java.nio.file.Files.move(Files.java:1425)
>  at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:795)
>  at kafka.log.AbstractIndex.renameTo(AbstractIndex.scala:209)
>  at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:497)
>  at kafka.log.Log.$anonfun$deleteSegmentFiles$1(Log.scala:2206)
>  at kafka.log.Log.$anonfun$deleteSegmentFiles$1$adapted(Log.scala:2206)
>  at scala.collection.immutable.List.foreach(List.scala:305)
>  at kafka.log.Log.deleteSegmentFiles(Log.scala:2206)
>  at kafka.log.Log.removeAndDeleteSegments(Log.scala:2191)
>  at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1700)
>  at scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.scala:17)
>  at kafka.log.Log.maybeHandleIOException(Log.scala:2316)
>  at kafka.log.Log.deleteSegments(Log.scala:1691)
>  at kafka.log.Log.deleteOldSegments(Log.scala:1686)
>  at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1763)
>  at kafka.log.Log.deleteOldSegments(Log.scala:1753)
>  at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:982)
>  at kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:979)
>  at scala.collection.immutable.List.foreach(List.scala:305)
>  at kafka.log.LogManager.cleanupLogs(LogManager.scala:979)
>  at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:403)
>  at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:116)
>  at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:65)
>  at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>  at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
>  at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  at java.base/java.lang.Thread.run(Thread.java:830)
>  Suppressed: java.nio.file.FileSystemException: 
> C:\Users\Administrat
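The rename above fails because Windows, unlike POSIX, refuses to move a file while another handle (here, the memory-mapped time index) is still open. The fallback pattern in `Utils.atomicMoveWithFallback` can be sketched roughly as follows (simplified; the real Kafka utility differs in details):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicMoveDemo {
    // Rough sketch of the atomic-move-with-fallback pattern: attempt an atomic
    // rename first, then fall back to a plain replace-existing move. When the
    // fallback also fails, the first failure is attached as a suppressed
    // exception -- which is why the log above shows "Suppressed:" frames.
    static void atomicMoveWithFallback(Path source, Path target) throws IOException {
        try {
            Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException outer) {
            try {
                Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
            } catch (IOException inner) {
                inner.addSuppressed(outer);
                throw inner;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Happy path: rename a segment index to its ".deleted" name.
        Path dir = Files.createTempDirectory("atomic-move-demo");
        Path src = Files.writeString(dir.resolve("00000000.timeindex"), "index-data");
        Path dst = dir.resolve("00000000.timeindex.deleted");
        atomicMoveWithFallback(src, dst);
        System.out.println(Files.exists(dst) && !Files.exists(src));
    }
}
```

On Windows, both move attempts can fail with the exception shown in the trace as long as the mapped index file has not been closed first, which is what the attached patches address.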

[jira] [Comment Edited] (KAFKA-10886) Kafka crashed in windows environment2

2020-12-23 Thread Wenbing Shen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-10886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17253993#comment-17253993
 ] 

Wenbing Shen edited comment on KAFKA-10886 at 12/23/20, 9:49 AM:
-

After referring to KAFKA-9458, I provided the above patch, which completely 
solves Kafka's compatibility problem on Windows; this is a complete and 
effective patch.


was (Author: wenbing.shen):
I refer to [kafka-9458|https://issues.apache.org/jira/browse/KAFKA-9458] and 
completely fix the kafka compatibility problem in Windows environment. This is 
a qualified patch.

> Kafka crashed in windows environment2
> -
>
> Key: KAFKA-10886
> URL: https://issues.apache.org/jira/browse/KAFKA-10886
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 2.0.0
> Environment: Windows Server
>Reporter: Wenbing Shen
>Priority: Critical
>  Labels: windows
> Attachments: windows_kafka_full_crash.patch
>
>
> I tried using the 
> [Kafka-9458|https://issues.apache.org/jira/browse/KAFKA-9458] patch to fix 
> the Kafka problem in the Windows environment, but it didn't seem to work. 
> The problems include a Kafka service restart causing data to be deleted by 
> mistake, and a topic deletion or partition migration causing a disk to go 
> offline or the broker to crash.
> [2020-12-23 17:26:11,124] ERROR (kafka-request-handler-11 
> kafka.server.LogDirFailureChannel 76) Error while renaming dir for test-1-1 
> in log dir 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log[2020-12-23
>  17:26:11,124] ERROR (kafka-request-handler-11 
> kafka.server.LogDirFailureChannel 76) Error while renaming dir for test-1-1 
> in log dir 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\logjava.nio.file.AccessDeniedException:
>  
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1
>  -> 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1.7c4fdeca40294cc38aed815b1a8e7663-delete
>  at sun.nio.fs.WindowsException.translateToIOException(Unknown Source) at 
> sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source) at 
> sun.nio.fs.WindowsFileCopy.move(Unknown Source) at 
> sun.nio.fs.WindowsFileSystemProvider.move(Unknown Source) at 
> java.nio.file.Files.move(Unknown Source) at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:786) at 
> kafka.log.Log$$anonfun$renameDir$1.apply$mcV$sp(Log.scala:689) at 
> kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:687) at 
> kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:687) at 
> kafka.log.Log.maybeHandleIOException(Log.scala:1837) at 
> kafka.log.Log.renameDir(Log.scala:687) at 
> kafka.log.LogManager.asyncDelete(LogManager.scala:833) at 
> kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:267) at 
> kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:262) at 
> kafka.utils.CoreUtils$.inLock(CoreUtils.scala:256) at 
> kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:264) at 
> kafka.cluster.Partition.delete(Partition.scala:262) at 
> kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:341) at 
> kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:371)
>  at 
> kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:369)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:369) at 
> kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:222) at 
> kafka.server.KafkaApis.handle(KafkaApis.scala:113) at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:73) at 
> java.lang.Thread.run(Unknown Source) Suppressed: 
> java.nio.file.AccessDeniedException: 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1
>  -> 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1.7c4fdeca40294cc38aed815b1a8e7663-delete
>  at sun.nio.fs.WindowsException.translateToIOException(Unknown Source) at 
> sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source) at 
> sun.nio.fs.WindowsFileCopy.move(Unknown Source) at 
> sun.nio.fs.WindowsFileSystemProvider.move(Unknown Source) at 
> java.nio.file.Files.move(Unknown Source) at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:783) 
> ... 23 more[2020-12-23 17:26:11,127] INFO (LogDirFailureHandler 
> kafka.server.ReplicaManager 66) [ReplicaManager 

[jira] [Updated] (KAFKA-10886) Kafka crashed in windows environment2

2020-12-23 Thread Wenbing Shen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenbing Shen updated KAFKA-10886:
-
External issue URL:   (was: 
https://issues.apache.org/jira/browse/KAFKA-9458)

> Kafka crashed in windows environment2
> -
>
> Key: KAFKA-10886
> URL: https://issues.apache.org/jira/browse/KAFKA-10886
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 2.0.0
> Environment: Windows Server
>Reporter: Wenbing Shen
>Priority: Critical
>  Labels: windows
>
> I tried using the Kafka-9458 patch to fix the Kafka problem in the Windows 
> environment, but it didn't seem to work. The problems include a Kafka 
> service restart causing data to be deleted by mistake, and a topic deletion 
> or partition migration causing a disk to go offline or the broker to crash.
> [2020-12-23 17:26:11,124] ERROR (kafka-request-handler-11 
> kafka.server.LogDirFailureChannel 76) Error while renaming dir for test-1-1 
> in log dir 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log[2020-12-23
>  17:26:11,124] ERROR (kafka-request-handler-11 
> kafka.server.LogDirFailureChannel 76) Error while renaming dir for test-1-1 
> in log dir 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\logjava.nio.file.AccessDeniedException:
>  
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1
>  -> 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1.7c4fdeca40294cc38aed815b1a8e7663-delete
>  at sun.nio.fs.WindowsException.translateToIOException(Unknown Source) at 
> sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source) at 
> sun.nio.fs.WindowsFileCopy.move(Unknown Source) at 
> sun.nio.fs.WindowsFileSystemProvider.move(Unknown Source) at 
> java.nio.file.Files.move(Unknown Source) at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:786) at 
> kafka.log.Log$$anonfun$renameDir$1.apply$mcV$sp(Log.scala:689) at 
> kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:687) at 
> kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:687) at 
> kafka.log.Log.maybeHandleIOException(Log.scala:1837) at 
> kafka.log.Log.renameDir(Log.scala:687) at 
> kafka.log.LogManager.asyncDelete(LogManager.scala:833) at 
> kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:267) at 
> kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:262) at 
> kafka.utils.CoreUtils$.inLock(CoreUtils.scala:256) at 
> kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:264) at 
> kafka.cluster.Partition.delete(Partition.scala:262) at 
> kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:341) at 
> kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:371)
>  at 
> kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:369)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:369) at 
> kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:222) at 
> kafka.server.KafkaApis.handle(KafkaApis.scala:113) at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:73) at 
> java.lang.Thread.run(Unknown Source) Suppressed: 
> java.nio.file.AccessDeniedException: 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1
>  -> 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1.7c4fdeca40294cc38aed815b1a8e7663-delete
>  at sun.nio.fs.WindowsException.translateToIOException(Unknown Source) at 
> sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source) at 
> sun.nio.fs.WindowsFileCopy.move(Unknown Source) at 
> sun.nio.fs.WindowsFileSystemProvider.move(Unknown Source) at 
> java.nio.file.Files.move(Unknown Source) at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:783) 
> ... 23 more[2020-12-23 17:26:11,127] INFO (LogDirFailureHandler 
> kafka.server.ReplicaManager 66) [ReplicaManager broker=1002] Stopping serving 
> replicas in dir 
> E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log





[jira] [Updated] (KAFKA-10886) Kafka crashed in windows environment2

2020-12-23 Thread Wenbing Shen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenbing Shen updated KAFKA-10886:
-
Description: 
I tried using the [Kafka-9458|https://issues.apache.org/jira/browse/KAFKA-9458] 
patch to fix the Kafka problem in the Windows environment, but it didn't seem 
to work. The problems include a Kafka service restart causing data to be 
deleted by mistake, and a topic deletion or partition migration causing a disk 
to go offline or the broker to crash.

[2020-12-23 17:26:11,124] ERROR (kafka-request-handler-11 
kafka.server.LogDirFailureChannel 76) Error while renaming dir for test-1-1 in 
log dir 
E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log[2020-12-23
 17:26:11,124] ERROR (kafka-request-handler-11 
kafka.server.LogDirFailureChannel 76) Error while renaming dir for test-1-1 in 
log dir 
E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\logjava.nio.file.AccessDeniedException:
 
E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1
 -> 
E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1.7c4fdeca40294cc38aed815b1a8e7663-delete
 at sun.nio.fs.WindowsException.translateToIOException(Unknown Source) at 
sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source) at 
sun.nio.fs.WindowsFileCopy.move(Unknown Source) at 
sun.nio.fs.WindowsFileSystemProvider.move(Unknown Source) at 
java.nio.file.Files.move(Unknown Source) at 
org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:786) at 
kafka.log.Log$$anonfun$renameDir$1.apply$mcV$sp(Log.scala:689) at 
kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:687) at 
kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:687) at 
kafka.log.Log.maybeHandleIOException(Log.scala:1837) at 
kafka.log.Log.renameDir(Log.scala:687) at 
kafka.log.LogManager.asyncDelete(LogManager.scala:833) at 
kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:267) at 
kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:262) at 
kafka.utils.CoreUtils$.inLock(CoreUtils.scala:256) at 
kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:264) at 
kafka.cluster.Partition.delete(Partition.scala:262) at 
kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:341) at 
kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:371)
 at 
kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:369)
 at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:369) at 
kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:222) at 
kafka.server.KafkaApis.handle(KafkaApis.scala:113) at 
kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:73) at 
java.lang.Thread.run(Unknown Source) Suppressed: 
java.nio.file.AccessDeniedException: 
E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1
 -> 
E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1.7c4fdeca40294cc38aed815b1a8e7663-delete
 at sun.nio.fs.WindowsException.translateToIOException(Unknown Source) at 
sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source) at 
sun.nio.fs.WindowsFileCopy.move(Unknown Source) at 
sun.nio.fs.WindowsFileSystemProvider.move(Unknown Source) at 
java.nio.file.Files.move(Unknown Source) at 
org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:783) ... 
23 more[2020-12-23 17:26:11,127] INFO (LogDirFailureHandler 
kafka.server.ReplicaManager 66) [ReplicaManager broker=1002] Stopping serving 
replicas in dir 
E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log

  was:
I tried using the Kafka-9458 patch to fix the Kafka problem in the Windows 
environment, but it didn't seem to work.These include restarting the Kafka 
service causing data to be deleted by mistake, deleting a topic or a partition 
migration causing a disk to go offline or the broker crashed.

[2020-12-23 17:26:11,124] ERROR (kafka-request-handler-11 
kafka.server.LogDirFailureChannel 76) Error while renaming dir for test-1-1 in 
log dir 
E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log[2020-12-23
 17:26:11,124] ERROR (kafka-request-handler-11 
kafka.server.LogDirFailureChannel 76) Error while renaming dir for test-1-1 in 
log dir 
E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\logjava.nio.file.AccessDeniedException:
 
E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1
 -> 
E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin

[jira] [Created] (KAFKA-10886) Kafka crashed in windows environment2

2020-12-23 Thread Wenbing Shen (Jira)
Wenbing Shen created KAFKA-10886:


 Summary: Kafka crashed in windows environment2
 Key: KAFKA-10886
 URL: https://issues.apache.org/jira/browse/KAFKA-10886
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 2.0.0
 Environment: Windows Server
Reporter: Wenbing Shen


I tried using the Kafka-9458 patch to fix the Kafka problem in the Windows 
environment, but it didn't seem to work. The problems include a Kafka service 
restart causing data to be deleted by mistake, and a topic deletion or 
partition migration causing a disk to go offline or the broker to crash.

[2020-12-23 17:26:11,124] ERROR (kafka-request-handler-11 
kafka.server.LogDirFailureChannel 76) Error while renaming dir for test-1-1 in 
log dir 
E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log[2020-12-23
 17:26:11,124] ERROR (kafka-request-handler-11 
kafka.server.LogDirFailureChannel 76) Error while renaming dir for test-1-1 in 
log dir 
E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\logjava.nio.file.AccessDeniedException:
 
E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1
 -> 
E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1.7c4fdeca40294cc38aed815b1a8e7663-delete
 at sun.nio.fs.WindowsException.translateToIOException(Unknown Source) at 
sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source) at 
sun.nio.fs.WindowsFileCopy.move(Unknown Source) at 
sun.nio.fs.WindowsFileSystemProvider.move(Unknown Source) at 
java.nio.file.Files.move(Unknown Source) at 
org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:786) at 
kafka.log.Log$$anonfun$renameDir$1.apply$mcV$sp(Log.scala:689) at 
kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:687) at 
kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:687) at 
kafka.log.Log.maybeHandleIOException(Log.scala:1837) at 
kafka.log.Log.renameDir(Log.scala:687) at 
kafka.log.LogManager.asyncDelete(LogManager.scala:833) at 
kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:267) at 
kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:262) at 
kafka.utils.CoreUtils$.inLock(CoreUtils.scala:256) at 
kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:264) at 
kafka.cluster.Partition.delete(Partition.scala:262) at 
kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:341) at 
kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:371)
 at 
kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:369)
 at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:369) at 
kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:222) at 
kafka.server.KafkaApis.handle(KafkaApis.scala:113) at 
kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:73) at 
java.lang.Thread.run(Unknown Source) Suppressed: 
java.nio.file.AccessDeniedException: 
E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1
 -> 
E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log\test-1-1.7c4fdeca40294cc38aed815b1a8e7663-delete
 at sun.nio.fs.WindowsException.translateToIOException(Unknown Source) at 
sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source) at 
sun.nio.fs.WindowsFileCopy.move(Unknown Source) at 
sun.nio.fs.WindowsFileSystemProvider.move(Unknown Source) at 
java.nio.file.Files.move(Unknown Source) at 
org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:783) ... 
23 more[2020-12-23 17:26:11,127] INFO (LogDirFailureHandler 
kafka.server.ReplicaManager 66) [ReplicaManager broker=1002] Stopping serving 
replicas in dir 
E:\bigdata\works\kafka-2.0.0.5\kafka-2.0.0.5\bin\windows\.\..\..\data01\kafka\log





[GitHub] [kafka] chia7712 edited a comment on pull request #9520: MINOR: replace test "expected" parameter by assertThrows

2020-12-23 Thread GitBox


chia7712 edited a comment on pull request #9520:
URL: https://github.com/apache/kafka/pull/9520#issuecomment-750005283


   @ijuma the latest commit replaces all ```ExpectedException``` and 
```@Test(expected = Exception.class)``` usages with ```assertThrows```. It 
seems to me we can get closer to JUnit 5 once this PR is in trunk. 
Furthermore, I'd like to file follow-up PRs to remove all JUnit 4 code:
   
   1. Replace ScalaTest by JUnit
   2. Replace JUnit 4 APIs by JUnit 5 for each module:
      1. rewrite parameterized tests
      2. replace ```org.junit``` by ```org.junit.jupiter.api```
      3. replace ```IntegrationTest``` by the tag ```Integration```
      4. replace ```useJUnit``` by ```useJUnitPlatform```
      5. remove ```testCompile libs.junitVintageEngine```
   
   WDYT?







[GitHub] [kafka] chia7712 commented on pull request #9520: MINOR: replace test "expected" parameter by assertThrows

2020-12-23 Thread GitBox


chia7712 commented on pull request #9520:
URL: https://github.com/apache/kafka/pull/9520#issuecomment-750005283


   @ijuma the latest commit replaces all ```ExpectedException``` and 
```@Test(expected = Exception.class)``` usages with ```assertThrows```. It 
seems to me we can get closer to JUnit 5 once this PR is in trunk. 
Furthermore, I'd like to file follow-up PRs to remove all JUnit 4 code:
   
   1. Replace ScalaTest by JUnit
   2. Replace JUnit 4 APIs by JUnit 5 for each module:
      1. rewrite parameterized tests
      2. replace ```org.junit``` by ```org.junit.jupiter.api```
      3. replace ```IntegrationTest``` by the tag ```Integration```
      4. replace ```useJUnit``` by ```useJUnitPlatform```
   
   WDYT?







[jira] [Updated] (KAFKA-10879) ReplicaFetcherThread crash when cluster doing reassign

2020-12-23 Thread zhifeng.peng (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhifeng.peng updated KAFKA-10879:
-
Description: 
[2020-12-21 12:01:28,110] ERROR [ReplicaFetcher replicaId=3, leaderId=6, 
fetcherId=9] Error due to (kafka.server.ReplicaFetcherThread)
 java.lang.NullPointerException
 [2020-12-21 12:01:28,110] INFO [ReplicaFetcher replicaId=3, leaderId=6, 
fetcherId=9] Stopped (kafka.server.ReplicaFetcherThread)

During a partition reassignment, one partition got stuck. I found that one 
ReplicaFetcherThread had stopped, leaving some partitions under-replicated. I 
have attached the jstack information.

  was:
[2020-12-21 12:01:28,110] ERROR [ReplicaFetcher replicaId=3, leaderId=6, 
fetcherId=9] Error due to (kafka.server.ReplicaFetcherThread)
 java.lang.NullPointerException
 [2020-12-21 12:01:28,110] INFO [ReplicaFetcher replicaId=3, leaderId=6, 
fetcherId=9] Stopped (kafka.server.ReplicaFetcherThread)

During reassign partition, one partition was stucked. I find one 
ReplicaFetcherThread has stoped. It resulting in some partition 
under-Replicated. 


> ReplicaFetcherThread crash when cluster doing reassign
> --
>
> Key: KAFKA-10879
> URL: https://issues.apache.org/jira/browse/KAFKA-10879
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 2.1.1
>Reporter: zhifeng.peng
>Priority: Major
> Attachments: image-2020-12-22-16-01-34-328.png, kafka.jstack
>
>
> [2020-12-21 12:01:28,110] ERROR [ReplicaFetcher replicaId=3, leaderId=6, 
> fetcherId=9] Error due to (kafka.server.ReplicaFetcherThread)
>  java.lang.NullPointerException
>  [2020-12-21 12:01:28,110] INFO [ReplicaFetcher replicaId=3, leaderId=6, 
> fetcherId=9] Stopped (kafka.server.ReplicaFetcherThread)
> During a partition reassignment, one partition got stuck. I found that one 
> ReplicaFetcherThread had stopped, leaving some partitions under-replicated. 
> I have attached the jstack information.





[jira] [Updated] (KAFKA-10879) ReplicaFetcherThread crash when cluster doing reassign

2020-12-23 Thread zhifeng.peng (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhifeng.peng updated KAFKA-10879:
-
Attachment: kafka.jstack

> ReplicaFetcherThread crash when cluster doing reassign
> --
>
> Key: KAFKA-10879
> URL: https://issues.apache.org/jira/browse/KAFKA-10879
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 2.1.1
>Reporter: zhifeng.peng
>Priority: Major
> Attachments: image-2020-12-22-16-01-34-328.png, kafka.jstack
>
>
> [2020-12-21 12:01:28,110] ERROR [ReplicaFetcher replicaId=3, leaderId=6, 
> fetcherId=9] Error due to (kafka.server.ReplicaFetcherThread)
>  java.lang.NullPointerException
>  [2020-12-21 12:01:28,110] INFO [ReplicaFetcher replicaId=3, leaderId=6, 
> fetcherId=9] Stopped (kafka.server.ReplicaFetcherThread)
> During a partition reassignment, one partition got stuck. I found that one 
> ReplicaFetcherThread had stopped, leaving some partitions under-replicated. 


