Github user wzhfy commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19295#discussion_r140623955
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkOptimizer.scala ---
    @@ -28,12 +28,18 @@ class SparkOptimizer(
         experimentalMethods: ExperimentalMethods)
       extends Optimizer(catalog) {
     
    -  override def batches: Seq[Batch] = (preOptimizationBatches ++ super.batches :+
    +  val experimentalPreOptimizations: Seq[Batch] = Seq(Batch(
    +    "User Provided Pre Optimizers", fixedPoint, experimentalMethods.extraPreOptimizations: _*))
    +
    +  val experimentalPostOptimizations: Batch = Batch(
    +    "User Provided Post Optimizers", fixedPoint, experimentalMethods.extraOptimizations: _*)
    +
    +  override def batches: Seq[Batch] = experimentalPreOptimizations ++
    +    (preOptimizationBatches ++ super.batches :+
    --- End diff --
    
    OK, I see. Then could you add the use case to the PR description? Something like:
    ```
    after this PR, we can add both pre/post optimization rules at runtime as follows:
    ...
    ```
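
    A minimal sketch of what such a usage example could look like, based on the fields shown in this diff (`extraPreOptimizations` is the field added by this PR; `extraOptimizations` already exists on `ExperimentalMethods`). The rule names and bodies below are hypothetical no-op placeholders:
    ```scala
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
    import org.apache.spark.sql.catalyst.rules.Rule

    // Hypothetical user-defined rules; a real rule would rewrite the plan.
    object MyPreOptimizationRule extends Rule[LogicalPlan] {
      override def apply(plan: LogicalPlan): LogicalPlan = plan // no-op placeholder
    }

    object MyPostOptimizationRule extends Rule[LogicalPlan] {
      override def apply(plan: LogicalPlan): LogicalPlan = plan // no-op placeholder
    }

    val spark = SparkSession.builder().master("local[*]").getOrCreate()

    // Pre-optimization rules run before the built-in optimizer batches (field added by this PR).
    spark.experimental.extraPreOptimizations = Seq(MyPreOptimizationRule)

    // Post-optimization rules run after the built-in batches (existing field).
    spark.experimental.extraOptimizations = Seq(MyPostOptimizationRule)
    ```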

