Github user daroo commented on the issue:

    https://github.com/apache/spark/pull/19789
  
    I don't know what plans @zsxwing has in mind for Structured Streaming, 
but before I created the PR I had actually looked at how this problem is 
currently solved in the kafka-0-10-sql module, and I didn't like it. My main 
concern is that in certain cases the cache size may grow well beyond the 
configured spark.sql.kafkaConsumerCache.capacity. That's why I've chosen to 
do it in a slightly different way.
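    To illustrate the concern (a hypothetical sketch, not Spark's actual 
code): a cache with a "soft" capacity that refuses to evict entries still 
in use can silently grow past its configured limit. The class and method 
names below are made up for the example.

    ```java
    import java.util.HashSet;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical LRU cache with a soft capacity: eviction is skipped
    // when the eldest entry is still in use, so size() can exceed capacity.
    class SoftCapacityCache<K, V> extends LinkedHashMap<K, V> {
        private final int capacity;
        private final Set<K> inUse = new HashSet<>();

        SoftCapacityCache(int capacity) {
            super(16, 0.75f, true); // access-order, for LRU semantics
            this.capacity = capacity;
        }

        void markInUse(K key) { inUse.add(key); }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            // Evict only when over capacity AND the eldest entry is idle;
            // if it is in use, the cache grows beyond its nominal capacity.
            return size() > capacity && !inUse.contains(eldest.getKey());
        }

        public static void main(String[] args) {
            SoftCapacityCache<String, Integer> cache = new SoftCapacityCache<>(2);
            cache.markInUse("a"); cache.put("a", 1);
            cache.markInUse("b"); cache.put("b", 2);
            cache.markInUse("c"); cache.put("c", 3);
            // size() is now 3, one over the nominal capacity of 2
            System.out.println(cache.size());
        }
    }
    ```

    With all three consumers marked in use, nothing is evictable, so the 
map holds three entries despite a capacity of two. A hard-capacity design 
would instead have to block, fail, or close an in-use entry.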

