sasakitoa commented on a change in pull request #9081:
URL: https://github.com/apache/kafka/pull/9081#discussion_r461264833



##########
File path: core/src/test/scala/integration/kafka/api/TransactionsTest.scala
##########
@@ -406,6 +406,26 @@ class TransactionsTest extends KafkaServerTestHarness {
     TestUtils.waitUntilTrue(() => offsetAndMetadata.equals(consumer.committed(Set(tp).asJava).get(tp)), "cannot read committed offset")
   }
 
+  @Test(expected = classOf[TimeoutException])
+  def testSendOffsetsToTransactionTimeout(): Unit = {
+    val producer = createTransactionalProducer("transactionProducer", maxBlockMs = 1000)
+    producer.initTransactions()
+    producer.beginTransaction()
+    producer.send(new ProducerRecord[Array[Byte], Array[Byte]](topic1, "foo".getBytes, "bar".getBytes))
+
+    for (i <- 0 until servers.size)
+      killBroker(i)
+
+    val offsets = new mutable.HashMap[TopicPartition, OffsetAndMetadata]().asJava
+    offsets.put(new TopicPartition(topic1, 0), new OffsetAndMetadata(0))
+    try {
+      producer.sendOffsetsToTransaction(offsets, "test-group")

Review comment:
       Replaced `mutable.HashMap` with `Map`.

##########
File path: clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java
##########
@@ -687,7 +687,7 @@ public void sendOffsetsToTransaction(Map<TopicPartition, OffsetAndMetadata> offsets
         throwIfProducerClosed();
         TransactionalRequestResult result = transactionManager.sendOffsetsToTransaction(offsets, groupMetadata);
         sender.wakeup();
-        result.await();
+        result.await(maxBlockTimeMs, TimeUnit.MILLISECONDS);

Review comment:
       I added a description of the `TimeoutException` and `InterruptedException` cases to the javadoc.
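
The one-line change above replaces an unbounded `result.await()` with a bounded wait, so a producer blocked on unreachable brokers fails fast instead of hanging. As a minimal sketch of those semantics (assuming, as a simplification, that the result is backed by a `CountDownLatch`; the class and method names below are illustrative, not Kafka's actual internals, and `java.util.concurrent.TimeoutException` stands in for Kafka's `org.apache.kafka.common.errors.TimeoutException`):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Illustrative sketch of a request result with a bounded await,
// mirroring result.await(maxBlockTimeMs, TimeUnit.MILLISECONDS).
class SketchResult {
    private final CountDownLatch latch = new CountDownLatch(1);

    // Called when the broker response (or error) arrives.
    void done() {
        latch.countDown();
    }

    // Bounded wait: throws TimeoutException instead of blocking forever
    // when no response ever arrives (e.g. all brokers are down).
    void await(long timeout, TimeUnit unit) throws InterruptedException, TimeoutException {
        if (!latch.await(timeout, unit))
            throw new TimeoutException("request did not complete within " + timeout + " " + unit);
    }
}
```

With this shape, a caller that configured a small `max.block.ms` sees a `TimeoutException` from `sendOffsetsToTransaction` once the deadline passes, which is exactly what the new test asserts.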

##########
File path: core/src/test/scala/integration/kafka/api/TransactionsTest.scala
##########
@@ -406,6 +406,26 @@ class TransactionsTest extends KafkaServerTestHarness {
     TestUtils.waitUntilTrue(() => offsetAndMetadata.equals(consumer.committed(Set(tp).asJava).get(tp)), "cannot read committed offset")
   }
 
+  @Test(expected = classOf[TimeoutException])
+  def testSendOffsetsToTransactionTimeout(): Unit = {
+    val producer = createTransactionalProducer("transactionProducer", maxBlockMs = 1000)
+    producer.initTransactions()
+    producer.beginTransaction()
+    producer.send(new ProducerRecord[Array[Byte], Array[Byte]](topic1, "foo".getBytes, "bar".getBytes))
+
+    for (i <- 0 until servers.size)
+      killBroker(i)
+
+    val offsets = new mutable.HashMap[TopicPartition, OffsetAndMetadata]().asJava
+    offsets.put(new TopicPartition(topic1, 0), new OffsetAndMetadata(0))
+    try {
+      producer.sendOffsetsToTransaction(offsets, "test-group")

Review comment:
       Replaced `mutable.HashMap` with `Map`.

##########
File path: core/src/test/scala/integration/kafka/api/TransactionsTest.scala
##########
@@ -406,6 +406,26 @@ class TransactionsTest extends KafkaServerTestHarness {
     TestUtils.waitUntilTrue(() => offsetAndMetadata.equals(consumer.committed(Set(tp).asJava).get(tp)), "cannot read committed offset")
   }
 
+  @Test(expected = classOf[TimeoutException])
+  def testSendOffsetsToTransactionTimeout(): Unit = {
+    val producer = createTransactionalProducer("transactionProducer", maxBlockMs = 1000)
+    producer.initTransactions()
+    producer.beginTransaction()
+    producer.send(new ProducerRecord[Array[Byte], Array[Byte]](topic1, "foo".getBytes, "bar".getBytes))
+
+    for (i <- 0 until servers.size)

Review comment:
       Modified from `size` to `indices`, thanks.

##########
File path: core/src/test/scala/integration/kafka/api/TransactionsTest.scala
##########
@@ -406,6 +406,26 @@ class TransactionsTest extends KafkaServerTestHarness {
     TestUtils.waitUntilTrue(() => offsetAndMetadata.equals(consumer.committed(Set(tp).asJava).get(tp)), "cannot read committed offset")
   }
 
+  @Test(expected = classOf[TimeoutException])
+  def testSendOffsetsToTransactionTimeout(): Unit = {
+    val producer = createTransactionalProducer("transactionProducer", maxBlockMs = 1000)
+    producer.initTransactions()
+    producer.beginTransaction()
+    producer.send(new ProducerRecord[Array[Byte], Array[Byte]](topic1, "foo".getBytes, "bar".getBytes))
+
+    for (i <- 0 until servers.size)
+      killBroker(i)
+
+    try {
+      producer.sendOffsetsToTransaction(Map(

Review comment:
       I added timeout tests for `initTransactions`, `commitTransaction`, and `abortTransaction` using the same base method.
   Is this implementation what you intended?
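
A shared base method like the one described above can be sketched as a helper that runs an arbitrary producer operation (with all brokers stopped) and reports whether it failed with a timeout. Everything below is a hypothetical illustration: `TimeoutTestSketch`, `Op`, and `expectTimeout` are not names from the PR, and `java.util.concurrent.TimeoutException` stands in for Kafka's unchecked `org.apache.kafka.common.errors.TimeoutException`:

```java
import java.util.concurrent.TimeoutException;

// Hypothetical sketch of a shared base method for timeout tests:
// each test supplies only the operation under test.
class TimeoutTestSketch {
    interface Op {
        void run() throws Exception;
    }

    // Returns true iff the operation failed with TimeoutException;
    // any other exception propagates so the test still fails loudly.
    static boolean expectTimeout(Op op) throws Exception {
        try {
            op.run();
            return false; // completed normally; no timeout occurred
        } catch (TimeoutException e) {
            return true;
        }
    }
}
```

Each test would then pass its own lambda, e.g. `expectTimeout(() -> producer.commitTransaction())` or `expectTimeout(() -> producer.initTransactions())`, keeping the broker-shutdown setup in one place.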

##########
File path: core/src/test/scala/integration/kafka/api/TransactionsTest.scala
##########
@@ -406,6 +406,26 @@ class TransactionsTest extends KafkaServerTestHarness {
     TestUtils.waitUntilTrue(() => offsetAndMetadata.equals(consumer.committed(Set(tp).asJava).get(tp)), "cannot read committed offset")
   }
 
+  @Test(expected = classOf[TimeoutException])
+  def testSendOffsetsToTransactionTimeout(): Unit = {
+    val producer = createTransactionalProducer("transactionProducer", maxBlockMs = 1000)
+    producer.initTransactions()
+    producer.beginTransaction()
+    producer.send(new ProducerRecord[Array[Byte], Array[Byte]](topic1, "foo".getBytes, "bar".getBytes))
+
+    for (i <- 0 until servers.size)

Review comment:
       Modified from `size` to `indices`, thanks.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

