Github user DaveDeCaprio commented on a diff in the pull request:

    https://github.com/apache/spark/pull/23076#discussion_r234444348
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/trees/TreeNode.scala ---
    @@ -701,3 +725,23 @@ abstract class TreeNode[BaseType <: TreeNode[BaseType]] extends Product {
         case _ => false
       }
     }
    +
    +object TreeNode {
    +  /**
    +   * String representations of large, deeply nested query plans can get extremely large.
    +   * To limit the impact, we add a parameter that limits logging to the top layers of the
    +   * tree if it gets too deep. This can be overridden by setting the
    +   * 'spark.debug.maxToStringTreeDepth' conf in SparkEnv.
    +   */
    +  val DEFAULT_MAX_TO_STRING_TREE_DEPTH = 15
    +
    +  def maxToStringTreeDepth: Int = {
    +    if (SparkEnv.get != null) {
    +      SparkEnv.get.conf.getInt("spark.debug.maxToStringTreeDepth", DEFAULT_MAX_TO_STRING_TREE_DEPTH)
    +    } else {
    +      DEFAULT_MAX_TO_STRING_TREE_DEPTH
    +    }
    +  }
    +
    +  /** Whether we have warned about plan string truncation yet. */
    +  private val treeDepthWarningPrinted = new AtomicBoolean(false)
    +}
    --- End diff ---
    
    I wasn't sure where to put this code, so I made a TreeNode companion object. If there is a better place, let me know.
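
    For anyone who wants to try this out, here is a minimal sketch of how the override would work from a driver program (hypothetical example; it assumes only the 'spark.debug.maxToStringTreeDepth' key and the default of 15 from the diff above):

        import org.apache.spark.{SparkConf, SparkContext}

        // Set the conf before the SparkEnv is created so the lookup in the
        // TreeNode companion object picks it up.
        val conf = new SparkConf()
          .setMaster("local[*]")
          .setAppName("tree-depth-demo")
          .set("spark.debug.maxToStringTreeDepth", "5")  // cap rendering at 5 levels
        val sc = new SparkContext(conf)

        // With a live SparkEnv, TreeNode.maxToStringTreeDepth returns 5;
        // without one (e.g. in a plain unit test) it falls back to the default of 15.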

