Github user gatorsmile commented on the issue:

    https://github.com/apache/spark/pull/21537
  
    To Spark users, introducing AnalysisBarrier was a disaster; to developers 
working on Spark internals, it is just a bug. If you have served customers who 
use Spark heavily, you will understand what I am talking about. These issues are 
even harder to debug when the Spark jobs are very complex. 
    
    Normally, we never commit/merge a PR that turns out to be useless, 
especially when the changes are not tiny. Reverting such PRs is also very 
painful; that is why Reynold took a few days to finish it. Rewriting it was not 
a fun job for him.
    
    Based on the current work, I expect hundreds of PRs will be submitted to 
change the codegen templates and polish the current code. The reason we have not 
merged this PR is that we doubt it is the right thing to do. @rednaxelafx 
    
    I am not saying @viirya and @mgaido91 did a bad job by submitting many PRs 
to improve the existing code. However, we need to think about the fundamental 
problems we are solving in codegen. Instead of reinventing a compiler, how about 
letting a compiler-internals expert (in our community, we have @kiszk) lead the 
effort and offer a design for this? Coding and designing are different matters. 
If possible, we should find the best person to drive it. If @viirya and 
@mgaido91 consider themselves familiar with compiler internals, I would also be 
glad to see their designs.


---
