dengziming commented on PR #38715: URL: https://github.com/apache/spark/pull/38715#issuecomment-1323037097
These failures come from [apache/kafka#12049](https://github.com/apache/kafka/pull/12049) and are described in the 3.3 upgrade notes: https://kafka.apache.org/documentation/#upgrade_33_notable. The new default partitioner keeps track of how many bytes are produced per partition and, once the amount exceeds `batch.size`, switches to the next partition. In the Spark Kafka tests this means that, in some tests, all records end up in a single partition. The simplest fix is to add `props.put("partitioner.class", classOf[org.apache.kafka.clients.producer.internals.DefaultPartitioner].getName)` in `KafkaTestUtils.producerConfiguration` (sketched below); alternatively, we could implement our own partitioner or set a small `batch.size`.
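A rough sketch of what that change could look like, assuming a `producerConfiguration` helper roughly like the one in Spark's `KafkaTestUtils`; the broker address field and serializer settings here are illustrative placeholders, only the `partitioner.class` line is the actual suggestion:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.ProducerConfig
import org.apache.kafka.common.serialization.StringSerializer

// Hypothetical shape of KafkaTestUtils.producerConfiguration with the proposed override.
private def producerConfiguration(brokerAddress: String): Properties = {
  val props = new Properties()
  props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerAddress)
  props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
  props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
  // Pin the partitioner back to the pre-3.3 DefaultPartitioner (deprecated in Kafka 3.3
  // but still available) so test records keep spreading across partitions instead of
  // filling one partition up to batch.size before moving on.
  props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG,
    classOf[org.apache.kafka.clients.producer.internals.DefaultPartitioner].getName)
  props
}
```

The other two options mentioned above would avoid depending on a deprecated class: a small custom `Partitioner` implementation registered the same way, or simply lowering `batch.size` so the new built-in partitioner rotates partitions more frequently.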