Github user gaborgsomogyi commented on the issue:

    https://github.com/apache/spark/pull/20997
  
    @koeninger 
    
    > I don't see an upper bound on the number of consumers per key, nor a way
    > of reaping idle consumers. If the SQL equivalent code is likely to be
    > modified to use pooling of some kind, seems better to make a consistent
    > decision.
    
    When do you think the decision will happen?
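    For reference, a bounded keyed pool with idle reaping is roughly what
    Apache Commons Pool 2 already provides. A minimal sketch of what that
    could look like here (assuming commons-pool2 2.5+ on the classpath;
    `CacheKey`, `CachedConsumer`, and the limits below are hypothetical
    placeholders, not the actual cache types in this PR):

    ```scala
    import org.apache.commons.pool2.{BaseKeyedPooledObjectFactory, PooledObject}
    import org.apache.commons.pool2.impl.{DefaultPooledObject, GenericKeyedObjectPool, GenericKeyedObjectPoolConfig}

    // Hypothetical stand-ins for the per-topic-partition cache entries.
    case class CacheKey(groupId: String, topic: String, partition: Int)

    class CachedConsumer(val key: CacheKey) {
      def close(): Unit = () // would close the underlying KafkaConsumer
    }

    // Factory the pool uses to create, wrap, and destroy pooled consumers.
    class ConsumerFactory extends BaseKeyedPooledObjectFactory[CacheKey, CachedConsumer] {
      override def create(key: CacheKey): CachedConsumer = new CachedConsumer(key)
      override def wrap(c: CachedConsumer): PooledObject[CachedConsumer] =
        new DefaultPooledObject(c)
      override def destroyObject(key: CacheKey, p: PooledObject[CachedConsumer]): Unit =
        p.getObject.close()
    }

    object ConsumerPool {
      private val config = new GenericKeyedObjectPoolConfig[CachedConsumer]()
      config.setMaxTotalPerKey(1)                          // upper bound per key
      config.setMaxIdlePerKey(1)
      config.setMinEvictableIdleTimeMillis(5 * 60 * 1000L) // reap consumers idle > 5 min
      config.setTimeBetweenEvictionRunsMillis(60 * 1000L)  // background evictor interval

      val pool = new GenericKeyedObjectPool(new ConsumerFactory, config)
    }
    ```

    Callers would then `borrowObject(key)` / `returnObject(key, consumer)`
    instead of hitting a plain cache, and the evictor thread handles idle
    reaping. Whether to pull in that dependency here or mirror whatever the
    SQL side decides is exactly the open question above.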
