[jira] [Commented] (KAFKA-14287) Multi noded with kraft combined mode will fail shutdown

2022-10-14 Thread Luke Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617521#comment-17617521
 ] 

Luke Chen commented on KAFKA-14287:
---

Thanks for the reply. But I don't know how we can fix this issue gracefully. 
Here are my thoughts:
 # Provide a two-phase shutdown for combined nodes: first shut down the broker 
role, and then the controller role. That way we can shut down all brokers in the 
cluster first, and then the controllers.
 # Document the known issue, and ask users not to run more combined nodes than 
the quorum size allows, e.g. 1 combined node + 2 controller nodes, or 2 combined 
nodes + 3 controller nodes...? (A minimal combined-node config sketch follows 
below for reference.)
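
For reference, here is that sketch. The values below are illustrative only (they are not taken from the reporter's setup); they just show the kind of combined-mode configuration this issue applies to:
{code}
# Node 1 of a 2-node combined-mode cluster (node 2 is analogous with node.id=2).
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@host1:9093,2@host2:9093
listeners=PLAINTEXT://host1:9092,CONTROLLER://host1:9093
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
{code}
With only these two voters the quorum size is 2, so stopping either node leaves a single controller, which matches the reproduction steps quoted below.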

 

cc [~cmccabe] 

> Multi noded with kraft combined mode will fail shutdown
> ---
>
> Key: KAFKA-14287
> URL: https://issues.apache.org/jira/browse/KAFKA-14287
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft
>Affects Versions: 3.3.1
>Reporter: Luke Chen
>Assignee: Luke Chen
>Priority: Major
>
> Multiple nodes with KRaft combined mode (i.e. 
> process.roles='broker,controller') can start up successfully. When shutting down in 
> combined mode, we'll unfence the broker first. When the remaining controller 
> nodes are fewer than the quorum size (i.e. N / 2 + 1), the unfence record will not 
> get committed to the metadata topic. So the broker will keep waiting 
> for the shutdown-granting response and eventually fail with a timeout error:
>  
> {code:java}
> [2022-10-11 18:01:14,341] ERROR [kafka-raft-io-thread]: Graceful shutdown of RaftClient failed (kafka.raft.KafkaRaftManager$RaftIoThread)
> java.util.concurrent.TimeoutException: Timeout expired before graceful shutdown completed
>     at org.apache.kafka.raft.KafkaRaftClient$GracefulShutdown.failWithTimeout(KafkaRaftClient.java:2408)
>     at org.apache.kafka.raft.KafkaRaftClient.maybeCompleteShutdown(KafkaRaftClient.java:2163)
>     at org.apache.kafka.raft.KafkaRaftClient.poll(KafkaRaftClient.java:2230)
>     at kafka.raft.KafkaRaftManager$RaftIoThread.doWork(RaftManager.scala:52)
>     at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
>  {code}
>  
>  
> To reproduce:
>  # Start up 2 KRaft combined nodes, so both nodes are needed to form a quorum.
>  # Shut down any one node. At this point the shutdown succeeds, because both 
> controllers are still alive while the broker shuts down, so the broker can be 
> granted shutdown.
>  # Shut down the 2nd node. This time the shutdown hangs, because only 
> 1 controller is left, and it eventually fails with the timeout error above.





[GitHub] [kafka] lucasbru commented on a diff in pull request #12743: KAFKA-14299: Fix incorrect pauses in separate state restoration

2022-10-14 Thread GitBox


lucasbru commented on code in PR #12743:
URL: https://github.com/apache/kafka/pull/12743#discussion_r995065939


##
streams/src/test/java/org/apache/kafka/streams/processor/internals/TaskManagerTest.java:
##
@@ -1481,6 +1481,36 @@ public void shouldTryToLockValidTaskDirsAtRebalanceStart() throws Exception {
         assertThat(taskManager.lockedTaskDirectories(), is(singleton(taskId01)));
     }
 
+    @Test
+    public void shouldPauseAllTopicsWithoutStateUpdaterOnRebalanceComplete() {
+        final Set<TopicPartition> assigned = mkSet(t1p0, t1p1);
+        expect(consumer.assignment()).andReturn(assigned);
+        consumer.pause(assigned);
+        expectLastCall();

Review Comment:
   Done



##
streams/src/test/java/org/apache/kafka/streams/processor/internals/TaskManagerTest.java:
##
@@ -1481,6 +1481,36 @@ public void shouldTryToLockValidTaskDirsAtRebalanceStart() throws Exception {
         assertThat(taskManager.lockedTaskDirectories(), is(singleton(taskId01)));
     }
 
+    @Test
+    public void shouldPauseAllTopicsWithoutStateUpdaterOnRebalanceComplete() {
+        final Set<TopicPartition> assigned = mkSet(t1p0, t1p1);
+        expect(consumer.assignment()).andReturn(assigned);
+        consumer.pause(assigned);
+        expectLastCall();
+        replay(consumer);
+        taskManager.handleRebalanceComplete();
+        verify(consumer);
+    }
+
+    @Test
+    public void shouldNotPauseReadyTasksWithStateUpdaterOnRebalanceComplete() {
+        taskManager = setUpTaskManager(StreamsConfigUtils.ProcessingMode.AT_LEAST_ONCE, true);
+        final StreamTask statefulTask0 = statefulTask(taskId00, taskId00ChangelogPartitions)
+            .inState(State.RESTORING)

Review Comment:
   Good point! That was a careless copy/paste from another test.



##
streams/src/test/java/org/apache/kafka/streams/processor/internals/TaskManagerTest.java:
##
@@ -1481,6 +1481,36 @@ public void shouldTryToLockValidTaskDirsAtRebalanceStart() throws Exception {
         assertThat(taskManager.lockedTaskDirectories(), is(singleton(taskId01)));
     }
 
+    @Test
+    public void shouldPauseAllTopicsWithoutStateUpdaterOnRebalanceComplete() {
+        final Set<TopicPartition> assigned = mkSet(t1p0, t1p1);
+        expect(consumer.assignment()).andReturn(assigned);
+        consumer.pause(assigned);
+        expectLastCall();
+        replay(consumer);
+        taskManager.handleRebalanceComplete();
+        verify(consumer);
+    }
+
+    @Test
+    public void shouldNotPauseReadyTasksWithStateUpdaterOnRebalanceComplete() {
+        taskManager = setUpTaskManager(StreamsConfigUtils.ProcessingMode.AT_LEAST_ONCE, true);
+        final StreamTask statefulTask0 = statefulTask(taskId00, taskId00ChangelogPartitions)
+            .inState(State.RESTORING)
+            .withInputPartitions(taskId00Partitions).build();
+        taskManager.addTask(statefulTask0);

Review Comment:
   Done. But out of interest, why not create a Tasks as it seems to be 
suppoorted by setUpTaskManager?



##
streams/src/test/java/org/apache/kafka/streams/processor/internals/TaskManagerTest.java:
##
@@ -1481,6 +1481,36 @@ public void shouldTryToLockValidTaskDirsAtRebalanceStart() throws Exception {
         assertThat(taskManager.lockedTaskDirectories(), is(singleton(taskId01)));
     }
 
+    @Test
+    public void shouldPauseAllTopicsWithoutStateUpdaterOnRebalanceComplete() {
+        final Set<TopicPartition> assigned = mkSet(t1p0, t1p1);
+        expect(consumer.assignment()).andReturn(assigned);
+        consumer.pause(assigned);
+        expectLastCall();
+        replay(consumer);
+        taskManager.handleRebalanceComplete();
+        verify(consumer);
+    }
+
+    @Test
+    public void shouldNotPauseReadyTasksWithStateUpdaterOnRebalanceComplete() {
+        taskManager = setUpTaskManager(StreamsConfigUtils.ProcessingMode.AT_LEAST_ONCE, true);
+        final StreamTask statefulTask0 = statefulTask(taskId00, taskId00ChangelogPartitions)
+            .inState(State.RESTORING)
+            .withInputPartitions(taskId00Partitions).build();
+        taskManager.addTask(statefulTask0);
+        final Set<TopicPartition> assigned = mkSet(t1p0, t1p1);
+
+        expect(consumer.assignment()).andReturn(assigned);
+        consumer.pause(assigned);
+        expectLastCall();

Review Comment:
   Done



##
streams/src/test/java/org/apache/kafka/streams/processor/internals/TaskManagerTest.java:
##
@@ -1481,6 +1481,36 @@ public void shouldTryToLockValidTaskDirsAtRebalanceStart() throws Exception {
         assertThat(taskManager.lockedTaskDirectories(), is(singleton(taskId01)));
     }
 
+    @Test
+    public void shouldPauseAllTopicsWithoutStateUpdaterOnRebalanceComplete() {
+        final Set<TopicPartition> assigned = mkSet(t1p0, t1p1);
+        expect(consumer.assignment()).andReturn(assigned);
+        consumer.pause(assigned);
+        expectLastCall();
+        replay(consumer);
+        taskManager.handleRebalanceComplete();
+        verify(consumer);

[GitHub] [kafka] cadonna commented on a diff in pull request #12743: KAFKA-14299: Fix incorrect pauses in separate state restoration

2022-10-14 Thread GitBox


cadonna commented on code in PR #12743:
URL: https://github.com/apache/kafka/pull/12743#discussion_r995452727


##
streams/src/test/java/org/apache/kafka/streams/processor/internals/TaskManagerTest.java:
##
@@ -1481,6 +1481,36 @@ public void shouldTryToLockValidTaskDirsAtRebalanceStart() throws Exception {
         assertThat(taskManager.lockedTaskDirectories(), is(singleton(taskId01)));
     }
 
+    @Test
+    public void shouldPauseAllTopicsWithoutStateUpdaterOnRebalanceComplete() {
+        final Set<TopicPartition> assigned = mkSet(t1p0, t1p1);
+        expect(consumer.assignment()).andReturn(assigned);
+        consumer.pause(assigned);
+        expectLastCall();
+        replay(consumer);
+        taskManager.handleRebalanceComplete();
+        verify(consumer);
+    }
+
+    @Test
+    public void shouldNotPauseReadyTasksWithStateUpdaterOnRebalanceComplete() {
+        taskManager = setUpTaskManager(StreamsConfigUtils.ProcessingMode.AT_LEAST_ONCE, true);
+        final StreamTask statefulTask0 = statefulTask(taskId00, taskId00ChangelogPartitions)
+            .inState(State.RESTORING)
+            .withInputPartitions(taskId00Partitions).build();
+        taskManager.addTask(statefulTask0);

Review Comment:
   The reason is that `TaskManager#addTask()` was added just for testing (we have 
a few others in the code base). IMO, it is not a clean way to design classes, 
because people might just use those methods in production code despite the 
comment on them, which might lead to errors. It is much cleaner and safer to set 
the state of the task manager with the `TasksRegistry` mock. 
   Actually, we should remove `TaskManager#addTask()` and refactor 
`TaskManagerTest`.






[GitHub] [kafka] cadonna commented on a diff in pull request #12743: KAFKA-14299: Fix incorrect pauses in separate state restoration

2022-10-14 Thread GitBox


cadonna commented on code in PR #12743:
URL: https://github.com/apache/kafka/pull/12743#discussion_r995454259


##
streams/src/test/java/org/apache/kafka/streams/integration/SmokeTestDriverIntegrationTest.java:
##
@@ -42,18 +43,18 @@
 import static org.apache.kafka.streams.tests.SmokeTestDriver.generate;
 import static org.apache.kafka.streams.tests.SmokeTestDriver.verify;
 
-@Category(IntegrationTest.class)
+@Timeout(600)
+@Tag("integration")

Review Comment:
   Nice! You also made the migration to JUnit 5.






[GitHub] [kafka] cadonna commented on a diff in pull request #12465: KAFKA-12950: Replace EasyMock and PowerMock with Mockito for KafkaStreamsTest

2022-10-14 Thread GitBox


cadonna commented on code in PR #12465:
URL: https://github.com/apache/kafka/pull/12465#discussion_r995461375


##
streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java:
##
@@ -1096,7 +1096,7 @@ private Optional<String> removeStreamThread(final long timeoutMs) throws Timeout
             // make a copy of threads to avoid holding lock
             for (final StreamThread streamThread : new ArrayList<>(threads)) {
                 final boolean callingThreadIsNotCurrentStreamThread = !streamThread.getName().equals(Thread.currentThread().getName());
-                if (streamThread.isAlive() && (callingThreadIsNotCurrentStreamThread || getNumLiveStreamThreads() == 1)) {
+                if (callingThreadIsNotCurrentStreamThread || getNumLiveStreamThreads() == 1) {

Review Comment:
   Wouldn't it be possible to just call `streamThread.state().isAlive()`?






[GitHub] [kafka] showuon opened a new pull request, #12748: KAFKA-13715: add generationId field in subscription

2022-10-14 Thread GitBox


showuon opened a new pull request, #12748:
URL: https://github.com/apache/kafka/pull/12748

   KIP-792
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[GitHub] [kafka] conradlco commented on pull request #12688: MINOR: Delete config/kraft/README.md

2022-10-14 Thread GitBox


conradlco commented on PR #12688:
URL: https://github.com/apache/kafka/pull/12688#issuecomment-1278739819

   @ijuma - hi Ismael, I just went to the main branch readme: 
https://github.com/apache/kafka#readme and this now has a broken link to this 
deleted readme, cheers.





[GitHub] [kafka] showuon commented on pull request #12688: MINOR: Delete config/kraft/README.md

2022-10-14 Thread GitBox


showuon commented on PR #12688:
URL: https://github.com/apache/kafka/pull/12688#issuecomment-1278745345

   @conradlco , nice find. Are you interested in submitting a PR to fix it?





[GitHub] [kafka] shekhar-rajak commented on pull request #12739: Replace EasyMock and PowerMock with Mockito | TimeOrderedCachingPersistentWindowStoreTest

2022-10-14 Thread GitBox


shekhar-rajak commented on PR #12739:
URL: https://github.com/apache/kafka/pull/12739#issuecomment-1278750513

   Not sure why these checks failed; the failures do not look relevant. Could someone please help?





[jira] [Created] (KAFKA-14302) Infinite probing rebalance if a changelog topic got emptied

2022-10-14 Thread Damien Gasparina (Jira)
Damien Gasparina created KAFKA-14302:


 Summary: Infinite probing rebalance if a changelog topic got 
emptied
 Key: KAFKA-14302
 URL: https://issues.apache.org/jira/browse/KAFKA-14302
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 3.3.1
Reporter: Damien Gasparina
 Attachments: image-2022-10-14-12-04-01-190.png

If a store with a changelog topic has been fully emptied, it can generate 
infinite probing rebalances.

 

The scenario is the following:
 * A Kafka Streams application has a store with a changelog
 * Many entries are pushed into the changelog, so the log end offset is high, 
let's say 20,000
 * Then the store gets emptied, either due to data retention (windowing) or 
tombstones
 * Then an instance of the application is restarted
 * It restores the store from the changelog, but does not write a checkpoint 
file as no data is pushed at all
 * As there are no checkpoint entries, this instance specifies a taskOffsetSums 
with the offset set to 0 in the subscriptionUserData
 * The group leader, during the assignment, then computes a lag of 20,000 (end 
offsets - task offset), which is greater than the default acceptable lag, and 
thus decides to schedule a probing rebalance
 * In the next probing rebalance, nothing has changed, so... a new probing rebalance

 

I was able to reproduce locally with a simple topology:

 
{code:java}
var table = streamsBuilder.stream("table");
streamsBuilder
    .stream("stream")
    .join(table, (eSt, eTb) -> eSt.toString() + eTb.toString(), JoinWindows.of(Duration.ofSeconds(5)))
    .to("output");{code}
 

 

 

Due to this issue, applications having an empty changelog are experiencing 
frequent rebalances:

!image-2022-10-14-12-04-01-190.png!
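
To make the arithmetic concrete, the leader's decision is essentially the following check. This is an illustrative sketch only, not the actual StreamsPartitionAssignor code; the config name acceptable.recovery.lag and its default of 10,000 are the documented Streams defaults:
{code:java}
// Illustrative sketch of the lag check behind the probing rebalance decision.
public class ProbingRebalanceLagSketch {
    public static void main(final String[] args) {
        final long endOffset = 20_000L;              // changelog log-end offset
        final long taskOffsetSum = 0L;               // reported by the restarted instance (no checkpoint file)
        final long acceptableRecoveryLag = 10_000L;  // acceptable.recovery.lag default

        final long lag = endOffset - taskOffsetSum;  // 20,000
        final boolean scheduleProbingRebalance = lag > acceptableRecoveryLag; // true
        System.out.println("lag=" + lag + " -> probing rebalance: " + scheduleProbingRebalance);
    }
}
{code}
Because the changelog stays empty, no restore can ever shrink this computed lag, so every follow-up probing rebalance reaches the same conclusion.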

 





[jira] [Created] (KAFKA-14303) Producer.send without record key and batch.size=0 goes into infinite loop

2022-10-14 Thread Igor Soarez (Jira)
Igor Soarez created KAFKA-14303:
---

 Summary: Producer.send without record key and batch.size=0 goes 
into infinite loop
 Key: KAFKA-14303
 URL: https://issues.apache.org/jira/browse/KAFKA-14303
 Project: Kafka
  Issue Type: Bug
  Components: clients, producer 
Affects Versions: 3.3.1, 3.3.0
Reporter: Igor Soarez


3.3 has broken previous producer behavior.

A call to {{producer.send(record)}} with a record without a key and configured 
with {{batch.size=0}} never returns.

Reproducer:
{code:java}
class ProducerIssueTest extends IntegrationTestHarness {
  override protected def brokerCount = 1

  @Test
  def testProduceWithBatchSizeZeroAndNoRecordKey(): Unit = {
    val topicName = "foo"
    createTopic(topicName)
    val overrides = new Properties
    overrides.put(ProducerConfig.BATCH_SIZE_CONFIG, 0)
    val producer = createProducer(keySerializer = new StringSerializer, valueSerializer = new StringSerializer, overrides)
    val record = new ProducerRecord[String, String](topicName, null, "hello")
    val future = producer.send(record) // goes into infinite loop here
    future.get(10, TimeUnit.SECONDS)
  }
} {code}
 

[Documentation for producer 
configuration|https://kafka.apache.org/documentation/#producerconfigs_batch.size]
 states {{batch.size=0}} as a valid value:
{quote}Valid Values: [0,...]
{quote}
and recommends its use directly:
{quote}A small batch size will make batching less common and may reduce 
throughput (a batch size of zero will disable batching entirely).
{quote}





[jira] [Updated] (KAFKA-14302) Infinite probing rebalance if a changelog topic got emptied

2022-10-14 Thread Damien Gasparina (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damien Gasparina updated KAFKA-14302:
-
Description: 
If a store, with a changelog topic, has been fully emptied, it could generate 
infinite probing rebalance.

 

The scenario is the following:
 * A Kafka Streams application have a store with a changelog
 * Many entries are pushed into the changelog, thus the Log end Offset is high, 
let's say 20,000
 * Then, the store got emptied, either due to data retention (windowing) or 
tombstone
 * Then an instance of the application is restarted
 * It restores the store from the changelog, but does not write a checkpoint 
file as there are no data pushed at all
 * As there are no checkpoint entries, this instance specify a taskOffsetSums 
with offset set to 0 in the subscriptionUserData
 * The group leader, during the assignment, then compute a lag of 20,000 (end 
offsets - task offset), which is greater than the default acceptable lag, thus 
decide to schedule a probing rebalance
 * In ther next probing rebalance, nothing changed, so... new probing rebalance

 

I was able to reproduce locally with a simple topology:

 
{code:java}
var table = streamsBuilder.stream("table");
streamsBuilder
.stream("stream")
.join(table, (eSt, eTb) -> eSt.toString() + eTb.toString(), 
JoinWindows.of(Duration.ofSeconds(5)))
.to("output");{code}
 

 

 

Due to this issue, application having an empty changelog are experiencing 
frequent rebalance:

!image-2022-10-14-12-04-01-190.png!

 

With assignments similar to:
{code:java}
[hello-world-8323d214-4c56-470f-bace-e4291cdf10eb-StreamThread-3] INFO 
org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor - 
stream-thread 
[hello-world-8323d214-4c56-470f-bace-e4291cdf10eb-StreamThread-3-consumer] 
Assigned tasks [0_5, 0_4, 0_3, 0_2, 0_1, 0_0] including stateful [0_5, 0_4, 
0_3, 0_2, 0_1, 0_0] to clients as: 
d0e2d556-2587-48e8-b9ab-43a4e8207be6=[activeTasks: ([]) standbyTasks: ([0_0, 
0_1, 0_2, 0_3, 0_4, 0_5])]
8323d214-4c56-470f-bace-e4291cdf10eb=[activeTasks: ([0_0, 0_1, 0_2, 0_3, 0_4, 
0_5]) standbyTasks: ([])].{code}

  was:
If a store, with a changelog topic, has been fully emptied, it could generate 
infinite probing rebalance.

 

The scenario is the following:
 * A Kafka Streams application have a store with a changelog
 * Many entries are pushed into the changelog, thus the Log end Offset is high, 
let's say 20,000
 * Then, the store got emptied, either due to data retention (windowing) or 
tombstone
 * Then an instance of the application is restarted
 * It restores the store from the changelog, but does not write a checkpoint 
file as there are no data pushed at all
 * As there are no checkpoint entries, this instance specify a taskOffsetSums 
with offset set to 0 in the subscriptionUserData
 * The group leader, during the assignment, then compute a lag of 20,000 (end 
offsets - task offset), which is greater than the default acceptable lag, thus 
decide to schedule a probing rebalance
 * In ther next probing rebalance, nothing changed, so... new probing rebalance

 

I was able to reproduce locally with a simple topology:

 
{code:java}
var table = streamsBuilder.stream("table");
streamsBuilder
.stream("stream")
.join(table, (eSt, eTb) -> eSt.toString() + eTb.toString(), 
JoinWindows.of(Duration.ofSeconds(5)))
.to("output");{code}
 

 

 

Due to this issue, application having an empty changelog are experiencing 
frequent rebalance:

!image-2022-10-14-12-04-01-190.png!

 


> Infinite probing rebalance if a changelog topic got emptied
> ---
>
> Key: KAFKA-14302
> URL: https://issues.apache.org/jira/browse/KAFKA-14302
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.3.1
>Reporter: Damien Gasparina
>Priority: Major
> Attachments: image-2022-10-14-12-04-01-190.png
>
>
> If a store, with a changelog topic, has been fully emptied, it could generate 
> infinite probing rebalance.
>  
> The scenario is the following:
>  * A Kafka Streams application have a store with a changelog
>  * Many entries are pushed into the changelog, thus the Log end Offset is 
> high, let's say 20,000
>  * Then, the store got emptied, either due to data retention (windowing) or 
> tombstone
>  * Then an instance of the application is restarted
>  * It restores the store from the changelog, but does not write a checkpoint 
> file as there are no data pushed at all
>  * As there are no checkpoint entries, this instance specify a taskOffsetSums 
> with offset set to 0 in the subscriptionUserData
>  * The group leader, during the assignment, then compute a lag of 20,000 (end 
> offsets - task offset), which is greater than the default acceptable lag, 
> thus decide to schedule a probing rebalance

[jira] [Commented] (KAFKA-14303) Producer.send without record key and batch.size=0 goes into infinite loop

2022-10-14 Thread Igor Soarez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617666#comment-17617666
 ] 

Igor Soarez commented on KAFKA-14303:
-

The issue can be replicated more specifically with a unit test on 
{{RecordAccumulator}}, by adding this test to {{RecordAccumulatorTest}}:
{code:java}
@Test
public void testBatchSizeZero() throws Exception {
    int batchSize = 0;
    long totalSize = 1024 * 1024;
    RecordAccumulator accum = createTestRecordAccumulator(batchSize, totalSize, CompressionType.NONE, 10);
    accum.append(topic, RecordMetadata.UNKNOWN_PARTITION, 0, key, new byte[32], Record.EMPTY_HEADERS,
        null, maxBlockTimeMs, false, time.milliseconds(), cluster);
}
{code}

> Producer.send without record key and batch.size=0 goes into infinite loop
> -
>
> Key: KAFKA-14303
> URL: https://issues.apache.org/jira/browse/KAFKA-14303
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 3.3.0, 3.3.1
>Reporter: Igor Soarez
>Priority: Major
>  Labels: Partitioner, bug, client, producer, regresion
>
> 3.3 has broken previous producer behavior.
> A call to {{producer.send(record)}} with a record without a key and 
> configured with {{batch.size=0}} never returns.
> Reproducer:
> {code:java}
> class ProducerIssueTest extends IntegrationTestHarness {
>   override protected def brokerCount = 1
>   @Test
>   def testProduceWithBatchSizeZeroAndNoRecordKey(): Unit = {
> val topicName = "foo"
> createTopic(topicName)
> val overrides = new Properties
> overrides.put(ProducerConfig.BATCH_SIZE_CONFIG, 0)
> val producer = createProducer(keySerializer = new StringSerializer, 
> valueSerializer = new StringSerializer, overrides)
> val record = new ProducerRecord[String, String](topicName, null, "hello")
> val future = producer.send(record) // goes into infinite loop here
> future.get(10, TimeUnit.SECONDS)
>   }
> } {code}
>  
> [Documentation for producer 
> configuration|https://kafka.apache.org/documentation/#producerconfigs_batch.size]
>  states {{batch.size=0}} as a valid value:
> {quote}Valid Values: [0,...]
> {quote}
> and recommends its use directly:
> {quote}A small batch size will make batching less common and may reduce 
> throughput (a batch size of zero will disable batching entirely).
> {quote}





[GitHub] [kafka] vamossagar12 commented on a diff in pull request #12561: KAFKA-12495: Exponential backoff retry to prevent rebalance storms when worker joins after revoking rebalance

2022-10-14 Thread GitBox


vamossagar12 commented on code in PR #12561:
URL: https://github.com/apache/kafka/pull/12561#discussion_r995680008


##
connect/runtime/src/test/java/org/apache/kafka/connect/runtime/distributed/WorkerCoordinatorIncrementalTest.java:
##
@@ -517,13 +517,13 @@ public void testTaskAssignmentWhenWorkerBounces() {
         leaderAssignment = deserializeAssignment(result, leaderId);
         assertAssignment(leaderId, offset,
             Collections.emptyList(), 0,
-            Collections.emptyList(), 0,
+            Collections.emptyList(), 1,

Review Comment:
   Thanks @C0urante. I have reverted the test with the follow-up rebalance. I was 
basing my thoughts on the fact that KIP-415 allows having more rebalances, so 
adding another one should be OK. Having said that, I am fine with whatever you 
stated as well.
   
   Regarding a separate PR, yes that makes sense. Thanks for your help on this!






[jira] [Commented] (KAFKA-14303) Producer.send without record key and batch.size=0 goes into infinite loop

2022-10-14 Thread Luke Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617696#comment-17617696
 ] 

Luke Chen commented on KAFKA-14303:
---

[~soarez], thanks for reporting the issue. It looks like you've done some 
investigation; are you willing to submit a PR to fix it?

 

> Producer.send without record key and batch.size=0 goes into infinite loop
> -
>
> Key: KAFKA-14303
> URL: https://issues.apache.org/jira/browse/KAFKA-14303
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 3.3.0, 3.3.1
>Reporter: Igor Soarez
>Priority: Major
>  Labels: Partitioner, bug, client, producer, regresion
>
> 3.3 has broken previous producer behavior.
> A call to {{producer.send(record)}} with a record without a key and 
> configured with {{batch.size=0}} never returns.
> Reproducer:
> {code:java}
> class ProducerIssueTest extends IntegrationTestHarness {
>   override protected def brokerCount = 1
>   @Test
>   def testProduceWithBatchSizeZeroAndNoRecordKey(): Unit = {
> val topicName = "foo"
> createTopic(topicName)
> val overrides = new Properties
> overrides.put(ProducerConfig.BATCH_SIZE_CONFIG, 0)
> val producer = createProducer(keySerializer = new StringSerializer, 
> valueSerializer = new StringSerializer, overrides)
> val record = new ProducerRecord[String, String](topicName, null, "hello")
> val future = producer.send(record) // goes into infinite loop here
> future.get(10, TimeUnit.SECONDS)
>   }
> } {code}
>  
> [Documentation for producer 
> configuration|https://kafka.apache.org/documentation/#producerconfigs_batch.size]
>  states {{batch.size=0}} as a valid value:
> {quote}Valid Values: [0,...]
> {quote}
> and recommends its use directly:
> {quote}A small batch size will make batching less common and may reduce 
> throughput (a batch size of zero will disable batching entirely).
> {quote}





[GitHub] [kafka] shekhar-rajak commented on pull request #12739: Replace EasyMock and PowerMock with Mockito | TimeOrderedCachingPersistentWindowStoreTest

2022-10-14 Thread GitBox


shekhar-rajak commented on PR #12739:
URL: https://github.com/apache/kafka/pull/12739#issuecomment-1278946808

   Failing tests seem unrelated 
   
   [
   kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsToZonedDateTime, 
   kafka.admin.MetadataQuorumCommandTest.testDescribeQuorumReplicationSuccessful
   ]





[GitHub] [kafka] shekhar-rajak commented on pull request #12735: Replace EasyMock and PowerMock with Mockito in connect/runtime/ErrorHandlingTaskTest | KAFKA-14059

2022-10-14 Thread GitBox


shekhar-rajak commented on PR #12735:
URL: https://github.com/apache/kafka/pull/12735#issuecomment-1278951012

   Failing tests seem unrelated
   
   [
   
org.apache.kafka.connect.mirror.integration.IdentityReplicationIntegrationTest.testReplication()
   KStream
   ]





[GitHub] [kafka] shekhar-rajak commented on pull request #12735: Replace EasyMock and PowerMock with Mockito in connect/runtime/ErrorHandlingTaskTest | KAFKA-14059

2022-10-14 Thread GitBox


shekhar-rajak commented on PR #12735:
URL: https://github.com/apache/kafka/pull/12735#issuecomment-1278952145

   Thanks @divijvaidya for the review. I have updated the PR. 





[GitHub] [kafka] shekhar-rajak commented on pull request #12739: Replace EasyMock and PowerMock with Mockito | TimeOrderedCachingPersistentWindowStoreTest

2022-10-14 Thread GitBox


shekhar-rajak commented on PR #12739:
URL: https://github.com/apache/kafka/pull/12739#issuecomment-1278966484

   Thanks @guozhangwang for the review. I have updated the PR, please have a 
look. 





[GitHub] [kafka] shekhar-rajak commented on pull request #12739: Replace EasyMock and PowerMock with Mockito | TimeOrderedCachingPersistentWindowStoreTest

2022-10-14 Thread GitBox


shekhar-rajak commented on PR #12739:
URL: https://github.com/apache/kafka/pull/12739#issuecomment-1278967518

   Ping for the PR review @C0urante  
   Thanks!





[jira] [Commented] (KAFKA-14303) Producer.send without record key and batch.size=0 goes into infinite loop

2022-10-14 Thread Igor Soarez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617712#comment-17617712
 ] 

Igor Soarez commented on KAFKA-14303:
-

 [~showuon] Yes, I'm currently investigating and hoping to come up with a PR

> Producer.send without record key and batch.size=0 goes into infinite loop
> -
>
> Key: KAFKA-14303
> URL: https://issues.apache.org/jira/browse/KAFKA-14303
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 3.3.0, 3.3.1
>Reporter: Igor Soarez
>Priority: Major
>  Labels: Partitioner, bug, client, producer, regresion
>
> 3.3 has broken previous producer behavior.
> A call to {{producer.send(record)}} with a record without a key and 
> configured with {{batch.size=0}} never returns.
> Reproducer:
> {code:java}
> class ProducerIssueTest extends IntegrationTestHarness {
>   override protected def brokerCount = 1
>   @Test
>   def testProduceWithBatchSizeZeroAndNoRecordKey(): Unit = {
> val topicName = "foo"
> createTopic(topicName)
> val overrides = new Properties
> overrides.put(ProducerConfig.BATCH_SIZE_CONFIG, 0)
> val producer = createProducer(keySerializer = new StringSerializer, 
> valueSerializer = new StringSerializer, overrides)
> val record = new ProducerRecord[String, String](topicName, null, "hello")
> val future = producer.send(record) // goes into infinite loop here
> future.get(10, TimeUnit.SECONDS)
>   }
> } {code}
>  
> [Documentation for producer 
> configuration|https://kafka.apache.org/documentation/#producerconfigs_batch.size]
>  states {{batch.size=0}} as a valid value:
> {quote}Valid Values: [0,...]
> {quote}
> and recommends its use directly:
> {quote}A small batch size will make batching less common and may reduce 
> throughput (a batch size of zero will disable batching entirely).
> {quote}





[jira] [Assigned] (KAFKA-14303) Producer.send without record key and batch.size=0 goes into infinite loop

2022-10-14 Thread Igor Soarez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Soarez reassigned KAFKA-14303:
---

Assignee: Igor Soarez

> Producer.send without record key and batch.size=0 goes into infinite loop
> -
>
> Key: KAFKA-14303
> URL: https://issues.apache.org/jira/browse/KAFKA-14303
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 3.3.0, 3.3.1
>Reporter: Igor Soarez
>Assignee: Igor Soarez
>Priority: Major
>  Labels: Partitioner, bug, client, producer, regresion
>
> 3.3 has broken previous producer behavior.
> A call to {{producer.send(record)}} with a record without a key and 
> configured with {{batch.size=0}} never returns.
> Reproducer:
> {code:java}
> class ProducerIssueTest extends IntegrationTestHarness {
>   override protected def brokerCount = 1
>   @Test
>   def testProduceWithBatchSizeZeroAndNoRecordKey(): Unit = {
> val topicName = "foo"
> createTopic(topicName)
> val overrides = new Properties
> overrides.put(ProducerConfig.BATCH_SIZE_CONFIG, 0)
> val producer = createProducer(keySerializer = new StringSerializer, 
> valueSerializer = new StringSerializer, overrides)
> val record = new ProducerRecord[String, String](topicName, null, "hello")
> val future = producer.send(record) // goes into infinite loop here
> future.get(10, TimeUnit.SECONDS)
>   }
> } {code}
>  
> [Documentation for producer 
> configuration|https://kafka.apache.org/documentation/#producerconfigs_batch.size]
>  states {{batch.size=0}} as a valid value:
> {quote}Valid Values: [0,...]
> {quote}
> and recommends its use directly:
> {quote}A small batch size will make batching less common and may reduce 
> throughput (a batch size of zero will disable batching entirely).
> {quote}





[jira] [Commented] (KAFKA-14133) Remaining EasyMock to Mockito tests

2022-10-14 Thread Shekhar Prasad Rajak (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617714#comment-17617714
 ] 

Shekhar Prasad Rajak commented on KAFKA-14133:
--

Please help with reviewing these PRs: 

 

* [https://github.com/apache/kafka/pull/12735] 
* [https://github.com/apache/kafka/pull/12739] 

> Remaining EasyMock to Mockito tests
> ---
>
> Key: KAFKA-14133
> URL: https://issues.apache.org/jira/browse/KAFKA-14133
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Christo Lolov
>Assignee: Christo Lolov
>Priority: Major
>
> {color:#de350b}There are tests which use both PowerMock and EasyMock. I have 
> put those in https://issues.apache.org/jira/browse/KAFKA-14132. Tests here 
> rely solely on EasyMock.{color}
> Unless stated in brackets the tests are in the streams module.
> A list of tests which still require to be moved from EasyMock to Mockito as 
> of 2nd of August 2022 which do not have a Jira issue and do not have pull 
> requests I am aware of which are opened:
> {color:#ff8b00}In Review{color}
> {color:#00875a}Merged{color}
>  # {color:#00875a}WorkerConnectorTest{color} (connect) (owner: [~yash.mayya] )
>  # {color:#00875a}WorkerCoordinatorTest{color} (connect) (owner: 
> [~yash.mayya] )
>  # {color:#00875a}RootResourceTest{color} (connect) (owner: [~yash.mayya] )
>  # {color:#00875a}ByteArrayProducerRecordEquals{color} (connect) (owner: 
> [~yash.mayya] )
>  # {color:#00875a}KStreamFlatTransformTest{color} (owner: Christo)
>  # {color:#00875a}KStreamFlatTransformValuesTest{color} (owner: Christo)
>  # {color:#00875a}KStreamPrintTest{color} (owner: Christo)
>  # {color:#00875a}KStreamRepartitionTest{color} (owner: Christo)
>  # {color:#00875a}MaterializedInternalTest{color} (owner: Christo)
>  # {color:#00875a}TransformerSupplierAdapterTest{color} (owner: Christo)
>  # {color:#00875a}KTableSuppressProcessorMetricsTest{color} (owner: Christo)
>  # {color:#00875a}ClientUtilsTest{color} (owner: Christo)
>  # {color:#00875a}HighAvailabilityStreamsPartitionAssignorTest{color} (owner: 
> Christo)
>  # {color:#ff8b00}StreamsRebalanceListenerTest{color} (owner: Christo)
>  # {color:#ff8b00}TimestampedKeyValueStoreMaterializerTest{color} (owner: 
> Christo)
>  # {color:#ff8b00}CachingInMemoryKeyValueStoreTest{color} (owner: Christo)
>  # {color:#ff8b00}CachingInMemorySessionStoreTest{color} (owner: Christo)
>  # {color:#ff8b00}CachingPersistentSessionStoreTest{color} (owner: Christo)
>  # {color:#ff8b00}CachingPersistentWindowStoreTest{color} (owner: Christo)
>  # {color:#ff8b00}ChangeLoggingKeyValueBytesStoreTest{color} (owner: Christo)
>  # {color:#ff8b00}ChangeLoggingTimestampedKeyValueBytesStoreTest{color} 
> (owner: Christo)
>  # {color:#ff8b00}CompositeReadOnlyWindowStoreTest{color} (owner: Christo)
>  # {color:#ff8b00}KeyValueStoreBuilderTest{color} (owner: Christo)
>  # {color:#ff8b00}RocksDBStoreTest{color} (owner: Christo)
>  # {color:#ff8b00}StreamThreadStateStoreProviderTest{color} (owner: Christo)
>  # {color:#ff8b00}TopologyTest{color} (owner: Christo)
>  # {color:#ff8b00}KTableSuppressProcessorTest{color} (owner: Christo)
>  # {color:#ff8b00}ChangeLoggingSessionBytesStoreTest{color} (owner: Christo)
>  # {color:#ff8b00}ChangeLoggingTimestampedWindowBytesStoreTest{color} (owner: 
> Christo)
>  # {color:#ff8b00}ChangeLoggingWindowBytesStoreTest{color} (owner: Christo)
>  # {color:#ff8b00}MeteredTimestampedWindowStoreTest{color} (owner: Christo)
>  # {color:#ff8b00}TaskManagerTest{color} (owner: Christo)
>  # InternalTopicManagerTest (owner: Christo)
>  # ProcessorContextImplTest (owner: Christo)
>  # WriteConsistencyVectorTest (owner: Christo)
>  # StreamsAssignmentScaleTest (owner: Christo)
>  # StreamsPartitionAssignorTest (owner: Christo)
>  # AssignmentTestUtils (owner: Christo)
>  # ProcessorStateManagerTest ({*}WIP{*} owner: Matthew)
>  # StandbyTaskTest ({*}WIP{*} owner: Matthew)
>  # StoreChangelogReaderTest ({*}WIP{*} owner: Matthew)
>  # StreamTaskTest ({*}WIP{*} owner: Matthew)
>  # StreamThreadTest ({*}WIP{*} owner: Matthew)
>  # StreamsMetricsImplTest
>  # TimeOrderedCachingPersistentWindowStoreTest
>  # TimeOrderedWindowStoreTest
> *The coverage report for the above tests after the change should be >= to 
> what the coverage is now.*





[GitHub] [kafka] lucasbru opened a new pull request, #12749: KAFKA-14299: Fix busy polling with separate state restoration

2022-10-14 Thread GitBox


lucasbru opened a new pull request, #12749:
URL: https://github.com/apache/kafka/pull/12749

   A StreamThread in state PARTITIONS_ASSIGNED was running in
   a busy loop until restoration finished, stealing CPU
   cycles from restoration and from other StreamThreads on the machine.
   
   Added a unit test to make sure the StreamThread uses poll_time when
   the state updater is enabled and we are in state PARTITIONS_ASSIGNED.
   
   ### Committer Checklist (excluded from commit message)
   - [x] Verify design and implementation 
   - [x] Verify test coverage and CI build status
   - [x] Verify documentation (including upgrade notes)
   





[GitHub] [kafka] ijuma opened a new pull request, #12750: MINOR: Inline "Running a Kafka broker in KRaft mode"

2022-10-14 Thread GitBox


ijuma opened a new pull request, #12750:
URL: https://github.com/apache/kafka/pull/12750

   Also moved KRaft mode above zk mode.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[GitHub] [kafka] ijuma commented on pull request #12688: MINOR: Delete config/kraft/README.md

2022-10-14 Thread GitBox


ijuma commented on PR #12688:
URL: https://github.com/apache/kafka/pull/12688#issuecomment-1279035666

   Since I broke this, I went ahead and submitted a PR 
https://github.com/apache/kafka/pull/12750
   
   Thanks for flagging this @conradlco .





[GitHub] [kafka] ijuma commented on pull request #12657: MINOR: More verbose display granularity in gradle test logging

2022-10-14 Thread GitBox


ijuma commented on PR #12657:
URL: https://github.com/apache/kafka/pull/12657#issuecomment-1279038561

   Very nice, thanks for improving this!





[GitHub] [kafka] ijuma commented on a diff in pull request #12734: KAFKA-14255; Return an empty record instead of an OffsetOutOfRangeException when fetching from a follower without a leader epoch

2022-10-14 Thread GitBox


ijuma commented on code in PR #12734:
URL: https://github.com/apache/kafka/pull/12734#discussion_r995790517


##
core/src/main/scala/kafka/cluster/Partition.scala:
##
@@ -1339,6 +1341,26 @@ class Partition(val topicPartition: TopicPartition,
   }
 }
 
+    // Fetching from a follower is only allowed from version 11 of the fetch request. Our intent
+    // was to allow it assuming that those would also implement KIP-320 (leader epoch). It turns
+    // out that some clients use version 11 without KIP-320 and the broker allows this. The issue
+    // is that we don't know whether the client fetches from the follower based on the order of
+    // the leader or by mistake e.g. based on stale metadata. The latter means that a client could
+    // end up on the follower with an offset that the follower does not have yet. Instead of returning
+    // OffsetOutOfRangeException, we return an empty batch to the client with the expectation that
+    // the client will retry and eventually refresh its metadata. Note that we only do this if the
+    // client does not provide a leader epoch and uses version 11.
+    if (isFollower && !currentLeaderEpoch.isPresent && fetchOffset > initialLogEndOffset) {

Review Comment:
   Unclean leader election, fetch version > 11 and no KIP-320 implemented seems 
like it would be rare enough not to make things too complex for it. KIP-320 is 
being implemented for librdkafka as we speak and we should file an issue on 
Sarama's and kafkajs's issue tracker for them to implement it too. That's the 
only way to have truly sane behavior.






[GitHub] [kafka] ijuma opened a new pull request, #12751: MINOR: Include TLS version in transport layer debug log

2022-10-14 Thread GitBox


ijuma opened a new pull request, #12751:
URL: https://github.com/apache/kafka/pull/12751

   This was helpful when debugging an issue recently.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[GitHub] [kafka] ijuma commented on pull request #12148: MINOR: Remove unnecessary log4j-appender dependency and tweak explicit log4j dependency

2022-10-14 Thread GitBox


ijuma commented on PR #12148:
URL: https://github.com/apache/kafka/pull/12148#issuecomment-1279058422

   @omkreddy Rebased.





[GitHub] [kafka] ijuma commented on pull request #12666: KAFKA-14244: Add guard against accidental calls to halt JVM during testing

2022-10-14 Thread GitBox


ijuma commented on PR #12666:
URL: https://github.com/apache/kafka/pull/12666#issuecomment-1279069434

   I think there's no way around the junit extension, but I was wondering if we 
could use that with the `Exit` classes versus the security manager approach. 
Then we don't have to depend on the deprecated security manager. But if that's 
too difficult, then we can go with the security manager approach and adjust it 
for newer JDK versions when/if a new api is introduced to replace it.
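   
   To illustrate that direction, a rough sketch only (it assumes Kafka's existing `org.apache.kafka.common.utils.Exit` test hooks and JUnit 5 extension callbacks, and is not the implementation proposed in this PR):
   ```java
   import org.apache.kafka.common.utils.Exit;
   import org.junit.jupiter.api.extension.AfterEachCallback;
   import org.junit.jupiter.api.extension.BeforeEachCallback;
   import org.junit.jupiter.api.extension.ExtensionContext;
   
   public class NoHaltExtension implements BeforeEachCallback, AfterEachCallback {
       @Override
       public void beforeEach(final ExtensionContext context) {
           // Replace halt/exit with procedures that fail the test instead of killing the JVM.
           Exit.setHaltProcedure((statusCode, message) -> {
               throw new AssertionError("Unexpected Exit.halt(" + statusCode + "): " + message);
           });
           Exit.setExitProcedure((statusCode, message) -> {
               throw new AssertionError("Unexpected Exit.exit(" + statusCode + "): " + message);
           });
       }
   
       @Override
       public void afterEach(final ExtensionContext context) {
           Exit.resetHaltProcedure();
           Exit.resetExitProcedure();
       }
   }
   ```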





[GitHub] [kafka] ijuma commented on pull request #12621: CC-17475 Migrate connect system tests to KRaft

2022-10-14 Thread GitBox


ijuma commented on PR #12621:
URL: https://github.com/apache/kafka/pull/12621#issuecomment-1279070040

   cc @rondagostino 





[GitHub] [kafka] soarez opened a new pull request, #12752: KAFKA-14303 Fix batch.size=0 in BuiltInPartitioner

2022-10-14 Thread GitBox


soarez opened a new pull request, #12752:
URL: https://github.com/apache/kafka/pull/12752

   This fixes a bug which causes a call to producer.send(record) with a record 
without a key and configured with batch.size=0 to never return.
   
   Without specifying a key or a custom partitioner, the new BuiltInPartitioner, 
as described in KIP-794, kicks in.
   
   BuiltInPartitioner seems to have been designed with the reasonable assumption 
that the batch size will never be lower than one.
   
   However, the documentation for producer configuration states batch.size=0 as a 
valid value, and even recommends its use directly. [1]
   
   [1] 
clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java:87
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[GitHub] [kafka] wcarlson5 commented on a diff in pull request #12465: KAFKA-12950: Replace EasyMock and PowerMock with Mockito for KafkaStreamsTest

2022-10-14 Thread GitBox


wcarlson5 commented on code in PR #12465:
URL: https://github.com/apache/kafka/pull/12465#discussion_r995860323


##
streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java:
##
@@ -1096,7 +1096,7 @@ private Optional<String> removeStreamThread(final long timeoutMs) throws Timeout
             // make a copy of threads to avoid holding lock
             for (final StreamThread streamThread : new ArrayList<>(threads)) {
                 final boolean callingThreadIsNotCurrentStreamThread = !streamThread.getName().equals(Thread.currentThread().getName());
-                if (streamThread.isAlive() && (callingThreadIsNotCurrentStreamThread || getNumLiveStreamThreads() == 1)) {
+                if (callingThreadIsNotCurrentStreamThread || getNumLiveStreamThreads() == 1) {

Review Comment:
   Maybe; if the mock works, then probably.






[GitHub] [kafka] lucasbru commented on pull request #12749: KAFKA-14299: Fix busy polling with separate state restoration

2022-10-14 Thread GitBox


lucasbru commented on PR #12749:
URL: https://github.com/apache/kafka/pull/12749#issuecomment-1279148211

   @guozhangwang @cadonna Do you want to have a look?
   





[GitHub] [kafka] soarez commented on pull request #12752: KAFKA-14303 Fix batch.size=0 in BuiltInPartitioner

2022-10-14 Thread GitBox


soarez commented on PR #12752:
URL: https://github.com/apache/kafka/pull/12752#issuecomment-1279151952

   @artemlivshits could you take a look at this one?





[GitHub] [kafka] jsancio commented on a diff in pull request #11023: KAFKA-13115 Update doSend doc about possible blocking

2022-10-14 Thread GitBox


jsancio commented on code in PR #11023:
URL: https://github.com/apache/kafka/pull/11023#discussion_r995913999


##
clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java:
##
@@ -778,9 +778,10 @@ public Future<RecordMetadata> send(ProducerRecord<K, V> record) {
     /**
      * Asynchronously send a record to a topic and invoke the provided callback when the send has been acknowledged.
      * 
-     * The send is asynchronous and this method will return immediately once the record has been stored in the buffer of
+     * The send is asynchronous and this method will return immediately (except for getting the topic metadata) once the record has been stored in the buffer of

Review Comment:
   @IvanVas, it looks like the code does block waiting for metadata information 
on the cluster, but only if it doesn't have the necessary information on the 
topic. In other words, the producer only blocks on the metadata response if this 
is the first record being sent to the cluster by this client for the given topic.
   
   How about encapsulating that in the comment? 
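   
   For what it's worth, a small snippet of the behaviour being described, purely as an illustration (topic name and values are made up; `max.block.ms` is the producer setting that bounds this metadata wait):
   ```java
   import java.util.Properties;
   import org.apache.kafka.clients.producer.KafkaProducer;
   import org.apache.kafka.clients.producer.ProducerConfig;
   import org.apache.kafka.clients.producer.ProducerRecord;
   import org.apache.kafka.common.serialization.StringSerializer;
   
   public class FirstSendBlockingExample {
       public static void main(final String[] args) {
           final Properties props = new Properties();
           props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
           props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "5000"); // caps how long send() may block on metadata
           props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
           props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
   
           try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
               // First send to a topic unknown to this client: may block (up to max.block.ms) fetching metadata.
               producer.send(new ProducerRecord<>("some-topic", "v1"));
               // Subsequent sends to the same topic: metadata is cached, so no metadata wait.
               producer.send(new ProducerRecord<>("some-topic", "v2"));
           }
       }
   }
   ```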






[jira] [Commented] (KAFKA-14080) too many node disconnected message in kafka-clients

2022-10-14 Thread Philip Bourke (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617865#comment-17617865
 ] 

Philip Bourke commented on KAFKA-14080:
---

same issue as https://issues.apache.org/jira/browse/KAFKA-13679

> too many node disconnected message in kafka-clients
> ---
>
> Key: KAFKA-14080
> URL: https://issues.apache.org/jira/browse/KAFKA-14080
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.1.1
>Reporter: rui
>Priority: Major
>
> When upgrading kafka-clients from 3.0.1 to 3.1.1, there are a lot of "Node 0 
> disconnected" messages in NetworkClient per day per listener (20-30k).
> I think it is introduced by
> [https://github.com/a0x8o/kafka/commit/cf22405663ec7854bde7eaa3f22b9818c276563f]
> Questions:
>  # Is it normal to have so many "Node X disconnected" messages at INFO level? Does my 
> Kafka server have any issue?
>  # It mentioned a back-off configuration to reduce the messages, but does it 
> work? (There are still so many messages.)





[GitHub] [kafka] cadonna commented on pull request #12743: KAFKA-14299: Fix incorrect pauses in separate state restoration

2022-10-14 Thread GitBox


cadonna commented on PR #12743:
URL: https://github.com/apache/kafka/pull/12743#issuecomment-1279253105

   Some failures regarding `SmokeTestDriverIntegrationTest` seem related. 





[GitHub] [kafka] cadonna commented on a diff in pull request #12465: KAFKA-12950: Replace EasyMock and PowerMock with Mockito for KafkaStreamsTest

2022-10-14 Thread GitBox


cadonna commented on code in PR #12465:
URL: https://github.com/apache/kafka/pull/12465#discussion_r995964228


##
streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java:
##
@@ -1096,7 +1096,7 @@ private Optional<String> removeStreamThread(final long 
timeoutMs) throws Timeout
 // make a copy of threads to avoid holding lock
 for (final StreamThread streamThread : new 
ArrayList<>(threads)) {
 final boolean callingThreadIsNotCurrentStreamThread = 
!streamThread.getName().equals(Thread.currentThread().getName());
-if (streamThread.isAlive() && 
(callingThreadIsNotCurrentStreamThread || getNumLiveStreamThreads() == 1)) {
+if (callingThreadIsNotCurrentStreamThread || 
getNumLiveStreamThreads() == 1) {

Review Comment:
   That is not a native method. A native method looks like the following in 
`java.lang.Thread`:
   ```
   public final native boolean isAlive();
   ``` 
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] cadonna commented on a diff in pull request #12465: KAFKA-12950: Replace EasyMock and PowerMock with Mockito for KafkaStreamsTest

2022-10-14 Thread GitBox


cadonna commented on code in PR #12465:
URL: https://github.com/apache/kafka/pull/12465#discussion_r995965165


##
streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java:
##
@@ -1096,7 +1096,7 @@ private Optional<String> removeStreamThread(final long 
timeoutMs) throws Timeout
 // make a copy of threads to avoid holding lock
 for (final StreamThread streamThread : new 
ArrayList<>(threads)) {
 final boolean callingThreadIsNotCurrentStreamThread = 
!streamThread.getName().equals(Thread.currentThread().getName());
-if (streamThread.isAlive() && 
(callingThreadIsNotCurrentStreamThread || getNumLiveStreamThreads() == 1)) {
+if (callingThreadIsNotCurrentStreamThread || 
getNumLiveStreamThreads() == 1) {

Review Comment:
   Ah, I think you mean because it is a method on an enum.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (KAFKA-14304) ZooKeeper to KRaft Migration

2022-10-14 Thread David Arthur (Jira)
David Arthur created KAFKA-14304:


 Summary: ZooKeeper to KRaft Migration
 Key: KAFKA-14304
 URL: https://issues.apache.org/jira/browse/KAFKA-14304
 Project: Kafka
  Issue Type: New Feature
Reporter: David Arthur
Assignee: David Arthur
 Fix For: 3.4.0


Top-level JIRA for 
[KIP-866|https://cwiki.apache.org/confluence/display/KAFKA/KIP-866+ZooKeeper+to+KRaft+Migration]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] jsancio commented on a diff in pull request #12750: MINOR: Inline "Running a Kafka broker in KRaft mode" in README.md

2022-10-14 Thread GitBox


jsancio commented on code in PR #12750:
URL: https://github.com/apache/kafka/pull/12750#discussion_r996023706


##
README.md:
##
@@ -83,15 +83,17 @@ fail due to code changes. You can just run:
  
 ./gradlew processMessages processTestMessages
 
+### Running a Kafka broker in KRaft mode
+
+KAFKA_CLUSTER_ID="$(./bin/kafka-storage.sh random-uuid)"
+./bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c 
config/kraft/server.properties
+./bin/kafka-server-start.sh config/kraft/server.properties
+

Review Comment:
   Can we have a `###` heading with something like "Running a Kafka Broker" 
with two sub-headings under it, one for KRaft mode and one for ZK mode?
   
   The `### Running a Kafka Broker` section should mention that only one subsection 
should be followed. In other words, the Kafka cluster should have either a KRaft-mode 
server or a ZK-mode server, but not both.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] jsancio merged pull request #12707: MINOR: add kraft controller log level entry in log4j prop

2022-10-14 Thread GitBox


jsancio merged PR #12707:
URL: https://github.com/apache/kafka/pull/12707


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (KAFKA-14305) KRaft Metadata Transactions

2022-10-14 Thread David Arthur (Jira)
David Arthur created KAFKA-14305:


 Summary: KRaft Metadata Transactions
 Key: KAFKA-14305
 URL: https://issues.apache.org/jira/browse/KAFKA-14305
 Project: Kafka
  Issue Type: New Feature
Reporter: David Arthur
 Fix For: 3.4.0


[https://cwiki.apache.org/confluence/display/KAFKA/KIP-868+Metadata+Transactions]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] Scanteianu opened a new pull request, #12753: MINOR: Document Offset and Partition 0-indexing, fix typo

2022-10-14 Thread GitBox


Scanteianu opened a new pull request, #12753:
URL: https://github.com/apache/kafka/pull/12753

   *More detailed description of your change,
   if necessary. The PR title and PR message become
   the squashed commit message, so use a separate
   comment to ping reviewers.*
   
   Add comments to clarify that both offsets and partitions are 0-indexed, and 
fix a minor typo
   
   *Summary of testing strategy (including rationale)
   for the feature or bug fix. Unit and/or integration
   tests are expected for any behaviour change and
   system tests should be considered for larger changes.*
   no testing required, just comments
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (KAFKA-14298) Getting null pointer exception

2022-10-14 Thread Greg Harris (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Harris updated KAFKA-14298:

Description: 
Getting null pointer exception.

 
{noformat}
java.lang.NullPointerException 
at 
org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:995) 
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:925) 
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:801) 
at 
com.wellsfargo.dci.opensource.kafka.KafkaUtil$$anonfun$10$$anonfun$11.apply(KafkaUtil.scala:363)
 
at 
com.wellsfargo.dci.opensource.kafka.KafkaUtil$$anonfun$10$$anonfun$11.apply(KafkaUtil.scala:361){noformat}
 

  was:
Getting null pointer exception.

java.lang.NullPointerException at 
org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:995) 
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:925) 
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:801) 
at 
com.wellsfargo.dci.opensource.kafka.KafkaUtil$$anonfun$10$$anonfun$11.apply(KafkaUtil.scala:363)
 at 
com.wellsfargo.dci.opensource.kafka.KafkaUtil$$anonfun$10$$anonfun$11.apply(KafkaUtil.scala:361)


> Getting null pointer exception
> --
>
> Key: KAFKA-14298
> URL: https://issues.apache.org/jira/browse/KAFKA-14298
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Ramakrishna
>Priority: Major
>
> Getting null pointer exception.
>  
> {noformat}
> java.lang.NullPointerException 
> at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:995)
>  
> at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:925) 
> at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:801) 
> at 
> com.wellsfargo.dci.opensource.kafka.KafkaUtil$$anonfun$10$$anonfun$11.apply(KafkaUtil.scala:363)
>  
> at 
> com.wellsfargo.dci.opensource.kafka.KafkaUtil$$anonfun$10$$anonfun$11.apply(KafkaUtil.scala:361){noformat}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14298) Getting null pointer exception

2022-10-14 Thread Greg Harris (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617967#comment-17617967
 ] 

Greg Harris commented on KAFKA-14298:
-

[~ramkychowdary0560] I'm not familiar with `com.wellsfargo.dci.*`; I assume 
that is an application you're working with that makes use of the Kafka 
Producer.

Taking a quick look at the implementation of doSend, it appears that it 
[implicitly requires the record being sent to be 
non-null|https://github.com/apache/kafka/blob/78b4ba7d85a6739ff308ef8d7af678c63ac06ef6/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L993-L995].
You may be passing in a null record, which is not being handled with a more 
descriptive error message.
In order to avoid this error, it may be necessary to add a null check of your 
own before calling send().

Also, as an aside, you may consider enabling more detailed NPE messages to aid 
with your debugging: _-XX:+ShowCodeDetailsInExceptionMessages_
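
As a minimal sketch of that suggestion (the helper class and method names here are 
made up for illustration and are not part of the reporter's code):

{noformat}
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public final class SafeSend {
    private SafeSend() { }

    // Guard against handing a null record to KafkaProducer#send(), which otherwise
    // surfaces as a bare NullPointerException from inside doSend().
    public static <K, V> void sendIfPresent(Producer<K, V> producer, ProducerRecord<K, V> record) {
        if (record == null) {
            // Log, skip, or throw something more descriptive than an NPE.
            throw new IllegalArgumentException("Attempted to send a null ProducerRecord");
        }
        producer.send(record);
    }
}
{noformat}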

> Getting null pointer exception
> --
>
> Key: KAFKA-14298
> URL: https://issues.apache.org/jira/browse/KAFKA-14298
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Ramakrishna
>Priority: Major
>
> Getting null pointer exception.
>  
> {noformat}
> java.lang.NullPointerException 
> at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:995)
>  
> at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:925) 
> at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:801) 
> at 
> com.wellsfargo.dci.opensource.kafka.KafkaUtil$$anonfun$10$$anonfun$11.apply(KafkaUtil.scala:363)
>  
> at 
> com.wellsfargo.dci.opensource.kafka.KafkaUtil$$anonfun$10$$anonfun$11.apply(KafkaUtil.scala:361){noformat}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14298) Getting null pointer exception

2022-10-14 Thread Greg Harris (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Harris updated KAFKA-14298:

Component/s: clients
 (was: KafkaConnect)

> Getting null pointer exception
> --
>
> Key: KAFKA-14298
> URL: https://issues.apache.org/jira/browse/KAFKA-14298
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Ramakrishna
>Priority: Major
>
> Getting null pointer exception.
>  
> {noformat}
> java.lang.NullPointerException 
> at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:995)
>  
> at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:925) 
> at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:801) 
> at 
> com.wellsfargo.dci.opensource.kafka.KafkaUtil$$anonfun$10$$anonfun$11.apply(KafkaUtil.scala:363)
>  
> at 
> com.wellsfargo.dci.opensource.kafka.KafkaUtil$$anonfun$10$$anonfun$11.apply(KafkaUtil.scala:361){noformat}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] soarez commented on pull request #12752: KAFKA-14303 Fix batch.size=0 in BuiltInPartitioner

2022-10-14 Thread GitBox


soarez commented on PR #12752:
URL: https://github.com/apache/kafka/pull/12752#issuecomment-1279487116

   Failing tests are unrelated:
   
   ```
   :clients:unitTest CooperativeStickyAssignorTest > 
testLargeAssignmentAndGroupWithNonEqualSubscription()
   :clients:unitTest StickyAssignorTest > 
testLargeAssignmentAndGroupWithNonEqualSubscription()
   :core:integrationTest TransactionsExpirationTest > 
testTransactionAfterProducerIdExpires(String) > 
kafka.api.TransactionsExpirationTest.testTransactionAfterProducerIdExpires(String)[2]
   :core:integrationTest TransactionsTest > testFailureToFenceEpoch(String) > 
kafka.api.TransactionsTest.testFailureToFenceEpoch(String)[2]
   :core:integrationTest TransactionsTest > 
testSendOffsetsToTransactionTimeout(String) > 
kafka.api.TransactionsTest.testSendOffsetsToTransactionTimeout(String)[2]
   :streams:integrationTest NamedTopologyIntegrationTest > 
shouldAllowRemovingAndAddingNamedTopologyToRunningApplicationWithMultipleNodesAndResetsOffsets
   :trogdor:integrationTest CoordinatorTest > 
testTaskRequestWithOldStartMsGetsUpdated()
   
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] soarez commented on pull request #12752: KAFKA-14303 Fix batch.size=0 in BuiltInPartitioner

2022-10-14 Thread GitBox


soarez commented on PR #12752:
URL: https://github.com/apache/kafka/pull/12752#issuecomment-1279487451

   @showuon @ijuma should this one also be backported to 3.3?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] ijuma commented on a diff in pull request #12752: KAFKA-14303 Fix batch.size=0 in BuiltInPartitioner

2022-10-14 Thread GitBox


ijuma commented on code in PR #12752:
URL: https://github.com/apache/kafka/pull/12752#discussion_r996190731


##
clients/src/main/java/org/apache/kafka/clients/producer/internals/BuiltInPartitioner.java:
##
@@ -56,6 +56,9 @@ public class BuiltInPartitioner {
 public BuiltInPartitioner(LogContext logContext, String topic, int 
stickyBatchSize) {
 this.log = logContext.logger(BuiltInPartitioner.class);
 this.topic = topic;
+if (stickyBatchSize < 1) {
+throw new IllegalArgumentException("stickyBatchSize must at least 
1");

Review Comment:
   You should include the value of `stickyBatchSize` in the message.
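   
   For example, something along these lines (just a sketch of the suggested message, 
assuming the check stays in the constructor):
   
   ```java
   if (stickyBatchSize < 1) {
       throw new IllegalArgumentException("stickyBatchSize must be at least 1, but it is " + stickyBatchSize);
   }
   ```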



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] ijuma commented on a diff in pull request #12752: KAFKA-14303 Fix batch.size=0 in BuiltInPartitioner

2022-10-14 Thread GitBox


ijuma commented on code in PR #12752:
URL: https://github.com/apache/kafka/pull/12752#discussion_r996192499


##
clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java:
##
@@ -127,7 +127,9 @@ public RecordAccumulator(LogContext logContext,
 this.closed = false;
 this.flushesInProgress = new AtomicInteger(0);
 this.appendsInProgress = new AtomicInteger(0);
-this.batchSize = batchSize;
+// As per Kafka producer configuration documentation batch.size may be 
set to 0 to explicitly disable
+// batching which in practice actually means using a batch size of 1.
+this.batchSize = Math.max(1, batchSize);

Review Comment:
   Would it make sense to do this in `KafkaProducer` when the config is 
retrieved?
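   
   A rough sketch of that alternative, assuming the clamping moves to where the 
config is read (constructor context elided, so this is illustrative only):
   
   ```java
   // In KafkaProducer, when the producer configs are resolved (sketch):
   int configuredBatchSize = config.getInt(ProducerConfig.BATCH_SIZE_CONFIG);
   // batch.size = 0 is documented as "disable batching", which in practice behaves
   // like a batch size of 1, so it could be clamped once here instead of in RecordAccumulator.
   int batchSize = Math.max(1, configuredBatchSize);
   ```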



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] guozhangwang merged pull request #12744: Kafka Streams Threading P2: Skeleton TaskExecutor Impl

2022-10-14 Thread GitBox


guozhangwang merged PR #12744:
URL: https://github.com/apache/kafka/pull/12744


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] guozhangwang opened a new pull request, #12754: Kafka Streams Threading P3: TaskManager Impl

2022-10-14 Thread GitBox


guozhangwang opened a new pull request, #12754:
URL: https://github.com/apache/kafka/pull/12754

   0. Add names to task executors.
   1. DefaultTaskManager implementation, for interacting with the TaskExecutors 
and supporting the add/remove/lock APIs.
   2. Related unit tests.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] guozhangwang merged pull request #12754: Kafka Streams Threading P3: TaskManager Impl

2022-10-14 Thread GitBox


guozhangwang merged PR #12754:
URL: https://github.com/apache/kafka/pull/12754


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org