This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 5252d8b  [SPARK-27046][DSTREAMS] Remove SPARK-19185 related references from documentation
5252d8b is described below

commit 5252d8b9872cbf200651b0bb7b8c6edd649ebb58
Author: Gabor Somogyi <gabor.g.somo...@gmail.com>
AuthorDate: Mon Mar 4 09:31:46 2019 -0600

    [SPARK-27046][DSTREAMS] Remove SPARK-19185 related references from documentation
    
    ## What changes were proposed in this pull request?
    
    SPARK-19185 is resolved, so the reference can be removed from the documentation.
    
    ## How was this patch tested?
    
    cd docs/
    SKIP_API=1 jekyll build
    Manual webpage check.
    
    Closes #23959 from gaborgsomogyi/SPARK-27046.
    
    Authored-by: Gabor Somogyi <gabor.g.somo...@gmail.com>
    Signed-off-by: Sean Owen <sean.o...@databricks.com>
---
 docs/streaming-kafka-0-10-integration.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/streaming-kafka-0-10-integration.md b/docs/streaming-kafka-0-10-integration.md
index c78459c..975adacca 100644
--- a/docs/streaming-kafka-0-10-integration.md
+++ b/docs/streaming-kafka-0-10-integration.md
@@ -96,7 +96,7 @@ In most cases, you should use `LocationStrategies.PreferConsistent` as shown abo
 
 The cache for consumers has a default maximum size of 64.  If you expect to be handling more than (64 * number of executors) Kafka partitions, you can change this setting via `spark.streaming.kafka.consumer.cache.maxCapacity`.
 
-If you would like to disable the caching for Kafka consumers, you can set `spark.streaming.kafka.consumer.cache.enabled` to `false`. Disabling the cache may be needed to workaround the problem described in SPARK-19185. This property may be removed in later versions of Spark, once SPARK-19185 is resolved.
+If you would like to disable the caching for Kafka consumers, you can set `spark.streaming.kafka.consumer.cache.enabled` to `false`.
 
 The cache is keyed by topicpartition and group.id, so use a **separate** `group.id` for each call to `createDirectStream`.
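
For reference, below is a minimal Scala sketch (not part of this commit) showing how the settings touched by this documentation change are typically applied with the spark-streaming-kafka-0-10 API. The application name, broker address, topic, and group.id are placeholder assumptions, not values from the patch.

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010._
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

    val conf = new SparkConf()
      .setAppName("KafkaCacheSettingsSketch")  // placeholder app name
      // Raise the consumer cache capacity (default 64) if an executor
      // handles more Kafka partitions than that.
      .set("spark.streaming.kafka.consumer.cache.maxCapacity", "128")
      // Or disable the cache entirely, as described in the paragraph above.
      // .set("spark.streaming.kafka.consumer.cache.enabled", "false")

    val ssc = new StreamingContext(conf, Seconds(10))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",  // placeholder broker
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      // The cache is keyed by topicpartition and group.id, so each
      // createDirectStream call should use its own group.id.
      "group.id" -> "example-group-1",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](Array("example-topic"), kafkaParams)
    )

    stream.map(record => (record.key, record.value)).print()

    ssc.start()
    ssc.awaitTermination()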
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
