maropu commented on a change in pull request #26970: [SPARK-28825][SQL][DOC] 
Documentation for Explain Command
URL: https://github.com/apache/spark/pull/26970#discussion_r361028626
 
 

 ##########
 File path: docs/sql-ref-syntax-qry-explain.md
 ##########
 @@ -19,4 +19,126 @@ license: |
   limitations under the License.
 ---
 
-**This page is under construction**
+### Description
+
+The `EXPLAIN` statement shows the execution plan for the given statement.
+By default, `EXPLAIN` outputs only the physical plan.
+`EXPLAIN` does not support the `DESCRIBE TABLE` statement.
+
+
+### Syntax
+{% highlight sql %}
+EXPLAIN [EXTENDED | CODEGEN | COST | FORMATTED] statement
+{% endhighlight %}
+
+### Parameters
+
+<dl>
+  <dt><code><em>EXTENDED</em></code></dt>
+  <dd>Generates the parsed logical plan, analyzed logical plan, optimized logical plan, and physical plan.
+   The parsed logical plan is an unresolved plan extracted from the query.
+   The analyzed logical plan resolves the parsed plan, translating <code>UnresolvedAttribute</code> and <code>UnresolvedRelation</code> into fully typed objects.
+   The optimized logical plan is produced by applying a set of optimization rules to the analyzed plan; the physical plan then describes how that plan will be executed.
+  </dd>
+</dl> 
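
The progression from parsed to physical plan described above can be sketched with a toy resolver and a single optimization rule. This is a conceptual illustration only, not Spark's Catalyst implementation; every name and rule here is hypothetical:

```python
# Toy illustration of the plan phases that EXPLAIN EXTENDED prints.
# NOT Spark's Catalyst optimizer; names and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class UnresolvedAttribute:
    """Parsed plan: attribute names are still unresolved strings."""
    name: str

@dataclass
class Attribute:
    """Analyzed plan: attributes are bound to a concrete type."""
    name: str
    dtype: str

def analyze(parsed, schema):
    """Resolve unresolved attributes against a table schema."""
    return [Attribute(a.name, schema[a.name]) for a in parsed]

def optimize(filters):
    """A single toy rule: drop always-true filter predicates."""
    return [f for f in filters if f != "TRUE"]

schema = {"id": "int"}
parsed = [UnresolvedAttribute("id")]              # == Parsed Logical Plan ==
analyzed = analyze(parsed, schema)                # == Analyzed Logical Plan ==
optimized_filters = optimize(["TRUE", "id > 0"])  # == Optimized Logical Plan ==

print(analyzed)           # [Attribute(name='id', dtype='int')]
print(optimized_filters)  # ['id > 0']
```

In the real pipeline each phase is a tree-to-tree transformation; the sketch only shows the resolution and rule-application ideas in miniature.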
+
+<dl>
+  <dt><code><em>CODEGEN</em></code></dt>
+  <dd>Generates a physical plan and the generated code for the statement, if any.</dd>
+</dl>
+
+<dl>
+  <dt><code><em>COST</em></code></dt>
+  <dd>If plan node statistics are available, generates the optimized logical plan and the statistics.</dd>
+</dl>
+
+<dl>
+  <dt><code><em>FORMATTED</em></code></dt>
+  <dd>Generates two sections: a physical plan outline and node details.</dd>
+</dl>
+
+### Examples
+{% highlight sql %}
+
+-- Using Extended
+
+EXPLAIN EXTENDED select * from emp;
++----------------------------------------------------+
+|                        plan                        |
++----------------------------------------------------+
+| == Parsed Logical Plan ==
+'Project [*]
++- 'UnresolvedRelation [emp]
+
+== Analyzed Logical Plan ==
+id: int
+Project [id#0]
++- SubqueryAlias `default`.`emp`
+   +- Relation[id#0] parquet
+
+== Optimized Logical Plan ==
+Relation[id#0] parquet
+
+== Physical Plan ==
+*(1) ColumnarToRow
++- FileScan parquet default.emp[id#0] Batched: true, DataFilters: [], Format: Parquet, Location: InMemoryFileIndex[file:/home/root1/Spark/spark/spark-warehouse/emp], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id:int>
+ |
++----------------------------------------------------+
+
+-- Default Output
+
+EXPLAIN select * from emp;
++----------------------------------------------------+
+|                        plan                        |
++----------------------------------------------------+
+| == Physical Plan ==
+*(1) ColumnarToRow
++- FileScan parquet default.emp[id#0] Batched: true, DataFilters: [], Format: Parquet, Location: InMemoryFileIndex[file:/home/root1/Spark/spark/spark-warehouse/emp], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id:int>
+
+ |
++----------------------------------------------------+
+
+-- Using Cost 
+EXPLAIN COST select * from emp;
++----------------------------------------------------+
+|                        plan                        |
++----------------------------------------------------+
+| == Optimized Logical Plan ==
+Relation[id#5] parquet, Statistics(sizeInBytes=421.0 B)
+
+== Physical Plan ==
+*(1) ColumnarToRow
++- FileScan parquet default.emp[id#5] Batched: true, DataFilters: [], Format: Parquet, Location: InMemoryFileIndex[file:/home/root1/Spark/spark/spark-warehouse/emp], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id:int>
+
+ |
++----------------------------------------------------+
+
+-- Using Formatted
+
+EXPLAIN FORMATTED select * from emp;
++----------------------------------------------------+
+|                        plan                        |
++----------------------------------------------------+
+| == Physical Plan ==
+* ColumnarToRow (2)
++- Scan parquet default.emp (1)
+
+
+(1) Scan parquet default.emp 
+Output: [id#5]
+Batched: true
+Location: InMemoryFileIndex [file:/home/root1/Spark/spark/spark-warehouse/emp]
+ReadSchema: struct<id:int>
+     
+(2) ColumnarToRow [codegen id : 1]
+Input: [id#5]
+     
+ |
++----------------------------------------------------+
+
+
+-- Using CODEGEN
+
+EXPLAIN CODEGEN select * from emp;
 
 Review comment:
   I think we don't need this CODEGEN example.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
