Github user sujith71955 commented on the issue:

    https://github.com/apache/spark/pull/16677
  
    @viirya Are we also looking to optimize the CollectLimitExec part? I saw 
that SparkPlan has an executeTake() method which incrementally scales up the 
number of partitions it scans while processing the limit query. If the driver 
analyzed some statistics about the data, I think we could optimize this 
algorithm as well, right?
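    For reference, the incremental strategy described above can be sketched 
roughly as follows. This is a simplified, hypothetical illustration in Python, 
not Spark's actual Scala implementation; the names `take_incremental` and 
`scale_up_factor` are illustrative (Spark exposes a similar knob as 
`spark.sql.limit.scaleUpFactor`):

```python
# Hypothetical sketch of an executeTake-style incremental scan (NOT Spark's
# actual code): scan a growing number of partitions per round until `limit`
# rows have been collected, multiplying the batch size by a scale-up factor
# after each round instead of scanning all partitions up front.
def take_incremental(partitions, limit, scale_up_factor=4):
    rows = []
    scanned = 0
    num_to_scan = 1
    while scanned < len(partitions) and len(rows) < limit:
        batch = partitions[scanned:scanned + num_to_scan]
        for part in batch:
            rows.extend(part)          # "run a job" over this batch of partitions
        scanned += len(batch)
        num_to_scan *= scale_up_factor # widen the scan for the next round
    return rows[:limit]
```

    The point of the question above is that if the driver had statistics 
(e.g. rows per partition), it could pick a better initial batch size than 
always starting from a single partition.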

