dongjoon-hyun commented on a change in pull request #25219: 
[SPARK-28464][Doc][SS] Document Kafka source minPartitions option
URL: https://github.com/apache/spark/pull/25219#discussion_r305607418
 
 

 ##########
 File path: docs/structured-streaming-kafka-integration.md
 ##########
 @@ -388,6 +388,19 @@ The following configurations are optional:
   <td>streaming and batch</td>
  <td>Rate limit on maximum number of offsets processed per trigger interval.
  The specified total number of offsets will be proportionally split across
  topicPartitions of different volume.</td>
 </tr>
+<tr>
+  <td>minPartitions</td>
+  <td>int</td>
+  <td></td>
+  <td>streaming and batch</td>
+  <td>Minimum number of partitions to read from Kafka.
+  You can configure Spark to use an arbitrary minimum number of partitions
+  to read from Kafka using the minPartitions option.
+  Normally Spark has a 1-1 mapping of Kafka TopicPartitions to Spark
+  partitions consuming from Kafka.
+  If you set the minPartitions option to a value greater than the number of
+  your Kafka TopicPartitions, Spark will divvy up large Kafka partitions
+  into smaller pieces.
+  This option can be set in cases of peak load or data skew, or when the
+  stream is falling behind, to increase the processing rate.
+  It comes at the cost of initializing Kafka consumers at each trigger,
+  which may impact performance if you use SSL when connecting to Kafka.</td>
+</tr>
 
 Review comment:
   Let's remove line 401~402, too.
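  To make the splitting behavior the doc describes concrete, here is a rough, self-contained Python sketch of the idea: each Kafka offset range is cut into a number of pieces proportional to its volume so that the total piece count reaches minPartitions. This is illustrative only, not Spark's internal implementation, and the function and topic names are hypothetical.

  ```python
  import math

  def split_offset_ranges(ranges, min_partitions):
      """Illustrative sketch (NOT Spark's actual code) of splitting Kafka
      offset ranges into more, smaller Spark partitions.

      ``ranges`` is a list of (topic_partition, start_offset, end_offset).
      """
      total = sum(end - start for _, start, end in ranges)
      if min_partitions <= len(ranges) or total == 0:
          # Default behavior: 1-1 mapping of TopicPartitions to Spark partitions.
          return list(ranges)
      out = []
      for tp, start, end in ranges:
          size = end - start
          # Pieces for this range, proportional to its share of the total volume.
          parts = max(1, round(min_partitions * size / total))
          step = math.ceil(size / parts)
          for s in range(start, end, step):
              out.append((tp, s, min(s + step, end)))
      return out

  # A large partition (1000 offsets) is split into many pieces, while a
  # small one (100 offsets) stays whole: 10 pieces in total.
  pieces = split_offset_ranges([("t-0", 0, 1000), ("t-1", 0, 100)], 10)
  ```

  Note how this also shows the cost mentioned in the doc text: every extra piece is a separate Spark task with its own Kafka consumer to initialize at each trigger.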

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
