This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
     new a36140c  [SPARK-32075][DOCS] Fix a few issues in parameters table
a36140c is described below

commit a36140c3c300beaf50d19381ac72e2524f888e53
Author: sidedoorleftroad <sidedoorleftr...@163.com>
AuthorDate: Wed Jun 24 13:39:55 2020 +0900

    [SPARK-32075][DOCS] Fix a few issues in parameters table
    
    ### What changes were proposed in this pull request?
    
    Fix a few issues in the parameters tables in the structured-streaming-kafka-integration doc.
    
    ### Why are the changes needed?
    
    Make the table headers consistent with the data: in several rows the Default and Meaning cells were in the wrong order.
    
    ### Does this PR introduce _any_ user-facing change?
    
    Yes.
    
    Before:
    ![image](https://user-images.githubusercontent.com/67275816/85414316-8475e300-b59e-11ea-84ec-fa78ecc980b3.png)
    After:
    ![image](https://user-images.githubusercontent.com/67275816/85414562-d61e6d80-b59e-11ea-9fe6-247e0ad4d9ee.png)

    Before:
    ![image](https://user-images.githubusercontent.com/67275816/85414467-b8510880-b59e-11ea-92a0-7205542fe28b.png)
    After:
    ![image](https://user-images.githubusercontent.com/67275816/85414589-de76a880-b59e-11ea-91f2-5073eaf3444b.png)

    Before:
    ![image](https://user-images.githubusercontent.com/67275816/85414502-c69f2480-b59e-11ea-837f-1201f10a56b6.png)
    After:
    ![image](https://user-images.githubusercontent.com/67275816/85414615-e9313d80-b59e-11ea-9b1a-fc11da0b6bc5.png)

    ### How was this patch tested?
    
    Manually built and checked.
    
    Closes #28910 from sidedoorleftroad/SPARK-32075.
    
    Authored-by: sidedoorleftroad <sidedoorleftr...@163.com>
    Signed-off-by: HyukjinKwon <gurwls...@apache.org>
    (cherry picked from commit 986fa01747db4b52bb8ca1165e759ca2d46d26ff)
    Signed-off-by: HyukjinKwon <gurwls...@apache.org>
---
 docs/structured-streaming-kafka-integration.md | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/docs/structured-streaming-kafka-integration.md b/docs/structured-streaming-kafka-integration.md
index 016faa7..8dc2a73 100644
--- a/docs/structured-streaming-kafka-integration.md
+++ b/docs/structured-streaming-kafka-integration.md
@@ -528,28 +528,28 @@ The following properties are available to configure the consumer pool:
 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
 <tr>
   <td>spark.kafka.consumer.cache.capacity</td>
-  <td>The maximum number of consumers cached. Please note that it's a soft limit.</td>
   <td>64</td>
+  <td>The maximum number of consumers cached. Please note that it's a soft limit.</td>
   <td>3.0.0</td>
 </tr>
 <tr>
   <td>spark.kafka.consumer.cache.timeout</td>
-  <td>The minimum amount of time a consumer may sit idle in the pool before it is eligible for eviction by the evictor.</td>
   <td>5m (5 minutes)</td>
+  <td>The minimum amount of time a consumer may sit idle in the pool before it is eligible for eviction by the evictor.</td>
   <td>3.0.0</td>
 </tr>
 <tr>
   <td>spark.kafka.consumer.cache.evictorThreadRunInterval</td>
-  <td>The interval of time between runs of the idle evictor thread for consumer pool. When non-positive, no idle evictor thread will be run.</td>
   <td>1m (1 minute)</td>
+  <td>The interval of time between runs of the idle evictor thread for consumer pool. When non-positive, no idle evictor thread will be run.</td>
   <td>3.0.0</td>
 </tr>
 <tr>
   <td>spark.kafka.consumer.cache.jmx.enable</td>
+  <td>false</td>
   <td>Enable or disable JMX for pools created with this configuration instance. Statistics of the pool are available via JMX instance.
   The prefix of JMX name is set to "kafka010-cached-simple-kafka-consumer-pool".
   </td>
-  <td>false</td>
   <td>3.0.0</td>
 </tr>
 </table>
@@ -578,14 +578,14 @@ The following properties are available to configure the fetched data pool:
 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
 <tr>
   <td>spark.kafka.consumer.fetchedData.cache.timeout</td>
-  <td>The minimum amount of time a fetched data may sit idle in the pool before it is eligible for eviction by the evictor.</td>
   <td>5m (5 minutes)</td>
+  <td>The minimum amount of time a fetched data may sit idle in the pool before it is eligible for eviction by the evictor.</td>
   <td>3.0.0</td>
 </tr>
 <tr>
   <td>spark.kafka.consumer.fetchedData.cache.evictorThreadRunInterval</td>
-  <td>The interval of time between runs of the idle evictor thread for fetched data pool. When non-positive, no idle evictor thread will be run.</td>
   <td>1m (1 minute)</td>
+  <td>The interval of time between runs of the idle evictor thread for fetched data pool. When non-positive, no idle evictor thread will be run.</td>
   <td>3.0.0</td>
 </tr>
 </table>
@@ -825,14 +825,14 @@ The following properties are available to configure the producer pool:
 <tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
 <tr>
   <td>spark.kafka.producer.cache.timeout</td>
-  <td>The minimum amount of time a producer may sit idle in the pool before it is eligible for eviction by the evictor.</td>
   <td>10m (10 minutes)</td>
+  <td>The minimum amount of time a producer may sit idle in the pool before it is eligible for eviction by the evictor.</td>
   <td>2.2.1</td>
 </tr>
 <tr>
   <td>spark.kafka.producer.cache.evictorThreadRunInterval</td>
-  <td>The interval of time between runs of the idle evictor thread for producer pool. When non-positive, no idle evictor thread will be run.</td>
   <td>1m (1 minute)</td>
+  <td>The interval of time between runs of the idle evictor thread for producer pool. When non-positive, no idle evictor thread will be run.</td>
   <td>3.0.0</td>
 </tr>
 </table>
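
For reference, the pool properties documented in the tables above are ordinary Spark configuration entries, so they can be supplied when the application starts. Below is a minimal Scala sketch, assuming Spark 3.0 with the spark-sql-kafka-0-10 connector on the classpath; the application name and property values shown are illustrative assumptions, not defaults or recommendations from the docs.

import org.apache.spark.sql.SparkSession

object KafkaPoolConfigSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-pool-config-sketch") // illustrative name
      // Soft limit on the number of cached Kafka consumers (default: 64).
      .config("spark.kafka.consumer.cache.capacity", "128")
      // Idle time before a pooled consumer is eligible for eviction (default: 5m).
      .config("spark.kafka.consumer.cache.timeout", "10m")
      // Idle time before a pooled producer is eligible for eviction (default: 10m).
      .config("spark.kafka.producer.cache.timeout", "15m")
      .getOrCreate()

    // Kafka sources and sinks are defined as usual; the consumer, fetched data,
    // and producer pools described above are managed internally by Spark.

    spark.stop()
  }
}

The same settings can also be passed on the command line, e.g. spark-submit --conf spark.kafka.consumer.cache.capacity=128 ...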


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
