Repository: spark
Updated Branches:
  refs/heads/master d3af6731f -> bde1d6a61


[SPARK-16294][SQL] Labelling support for the include_example Jekyll plugin

## What changes were proposed in this pull request?

This PR adds labelling support for the `include_example` Jekyll plugin, so that
a single source file can be split into multiple labelled blocks of lines, each
of which can be included as a separate code snippet in the generated HTML pages.
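
As a concrete illustration of the new syntax, using the `init_session` label introduced by
this patch: a block of lines in an example source file is bracketed with labelled markers,
and a doc page pulls in just that block by prefixing the label to the file path in the
`include_example` tag. The Scala snippet below is taken from the patched `RDDRelation.scala`
shown in the diff; multiple ranges carrying the same label are concatenated into one snippet.

```scala
// Markers added to examples/src/main/scala/org/apache/spark/examples/sql/RDDRelation.scala:
// $example on:init_session$
import org.apache.spark.sql.SparkSession
// $example off:init_session$

// ... later, inside main():
// $example on:init_session$
val spark = SparkSession
  .builder
  .appName("Spark Examples")
  .config("spark.some.config.option", "some-value")
  .getOrCreate()

// Importing the SparkSession gives access to all the SQL functions and implicit conversions.
import spark.implicits._
// $example off:init_session$
```

The SQL programming guide then includes only this block with
`{% include_example init_session scala/org/apache/spark/examples/sql/RDDRelation.scala %}`;
omitting the label keeps the old behaviour of selecting the unlabelled
`$example on$`/`$example off$` ranges.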

## How was this patch tested?

Manually tested.

<img width="923" alt="screenshot at jun 29 19-53-21" 
src="https://cloud.githubusercontent.com/assets/230655/16451099/66a76db2-3e33-11e6-84fb-63104c2f0688.png";>

Author: Cheng Lian <l...@databricks.com>

Closes #13972 from liancheng/include-example-with-labels.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/bde1d6a6
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/bde1d6a6
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/bde1d6a6

Branch: refs/heads/master
Commit: bde1d6a61593aeb62370f526542cead94919b0c0
Parents: d3af673
Author: Cheng Lian <l...@databricks.com>
Authored: Wed Jun 29 22:50:53 2016 -0700
Committer: Xiangrui Meng <m...@databricks.com>
Committed: Wed Jun 29 22:50:53 2016 -0700

----------------------------------------------------------------------
 docs/_plugins/include_example.rb                | 25 +++++++++---
 docs/sql-programming-guide.md                   | 41 +++-----------------
 .../apache/spark/examples/sql/JavaSparkSQL.java |  5 +++
 examples/src/main/python/sql.py                 |  5 +++
 .../apache/spark/examples/sql/RDDRelation.scala | 10 ++++-
 5 files changed, 43 insertions(+), 43 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/bde1d6a6/docs/_plugins/include_example.rb
----------------------------------------------------------------------
diff --git a/docs/_plugins/include_example.rb b/docs/_plugins/include_example.rb
index f748582..3068888 100644
--- a/docs/_plugins/include_example.rb
+++ b/docs/_plugins/include_example.rb
@@ -32,8 +32,18 @@ module Jekyll
       @code_dir = File.join(site.source, config_dir)
 
       clean_markup = @markup.strip
-      @file = File.join(@code_dir, clean_markup)
-      @lang = clean_markup.split('.').last
+
+      parts = clean_markup.strip.split(' ')
+      if parts.length > 1 then
+        @snippet_label = ':' + parts[0]
+        snippet_file = parts[1]
+      else
+        @snippet_label = ''
+        snippet_file = parts[0]
+      end
+
+      @file = File.join(@code_dir, snippet_file)
+      @lang = snippet_file.split('.').last
 
       code = File.open(@file).read.encode("UTF-8")
       code = select_lines(code)
@@ -41,7 +51,7 @@ module Jekyll
       rendered_code = Pygments.highlight(code, :lexer => @lang)
 
       hint = "<div><small>Find full example code at " \
-        "\"examples/src/main/#{clean_markup}\" in the Spark 
repo.</small></div>"
+        "\"examples/src/main/#{snippet_file}\" in the Spark 
repo.</small></div>"
 
       rendered_code + hint
     end
@@ -66,13 +76,13 @@ module Jekyll
       # Select the array of start labels from code.
       startIndices = lines
         .each_with_index
-        .select { |l, i| l.include? "$example on$" }
+        .select { |l, i| l.include? "$example on#{@snippet_label}$" }
         .map { |l, i| i }
 
       # Select the array of end labels from code.
       endIndices = lines
         .each_with_index
-        .select { |l, i| l.include? "$example off$" }
+        .select { |l, i| l.include? "$example off#{@snippet_label}$" }
         .map { |l, i| i }
 
       raise "Start indices amount is not equal to end indices amount, see 
#{@file}." \
@@ -92,7 +102,10 @@ module Jekyll
             if start == endline
         lastIndex = endline
         range = Range.new(start + 1, endline - 1)
-        result += trim_codeblock(lines[range]).join
+        trimmed = trim_codeblock(lines[range])
+        # Filter out possible example tags of overlapped labels.
+        tags_filtered = trimmed.select { |l| !l.include? '$example ' }
+        result += tags_filtered.join
         result += "\n"
       end
       result
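
Because differently labelled blocks may now overlap in the same source file, the filtering
step added above strips marker lines containing `'$example '` that belong to other labels out
of the rendered snippet. A hypothetical sketch (the nested `setup` label and the
`LabelledExample` object below are illustrative only and do not appear in this patch):

```scala
// Hypothetical nesting of two labelled blocks in a Scala example file.
import org.apache.spark.sql.SparkSession

object LabelledExample {
  def main(args: Array[String]): Unit = {
    // $example on:setup$
    // $example on:init_session$
    val spark = SparkSession.builder.appName("Spark Examples").getOrCreate()
    // $example off:init_session$
    val df = spark.range(100).toDF("id")
    df.show()
    // $example off:setup$
    spark.stop()
  }
}
```

When the `setup` block is rendered, everything between its markers is selected, and the inner
`init_session` marker lines are filtered out, so only real code reaches the generated HTML.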

http://git-wip-us.apache.org/repos/asf/spark/blob/bde1d6a6/docs/sql-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index 6c6bc8d..68419e1 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -63,52 +63,23 @@ Throughout this document, we will often refer to Scala/Java Datasets of `Row`s a
 <div class="codetabs">
 <div data-lang="scala"  markdown="1">
 
-The entry point into all functionality in Spark is the [`SparkSession`](api/scala/index.html#org.apache.spark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.build()`:
-
-{% highlight scala %}
-import org.apache.spark.sql.SparkSession
-
-val spark = SparkSession.build()
-  .master("local")
-  .appName("Word Count")
-  .config("spark.some.config.option", "some-value")
-  .getOrCreate()
-
-// this is used to implicitly convert an RDD to a DataFrame.
-import spark.implicits._
-{% endhighlight %}
+The entry point into all functionality in Spark is the [`SparkSession`](api/scala/index.html#org.apache.spark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.builder()`:
 
+{% include_example init_session scala/org/apache/spark/examples/sql/RDDRelation.scala %}
 </div>
 
 <div data-lang="java" markdown="1">
 
-The entry point into all functionality in Spark is the [`SparkSession`](api/java/index.html#org.apache.spark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.build()`:
+The entry point into all functionality in Spark is the [`SparkSession`](api/java/index.html#org.apache.spark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.builder()`:
 
-{% highlight java %}
-import org.apache.spark.sql.SparkSession
-
-SparkSession spark = SparkSession.build()
-  .master("local")
-  .appName("Word Count")
-  .config("spark.some.config.option", "some-value")
-  .getOrCreate();
-{% endhighlight %}
+{% include_example init_session java/org/apache/spark/examples/sql/JavaSparkSQL.java %}
 </div>
 
 <div data-lang="python"  markdown="1">
 
-The entry point into all functionality in Spark is the [`SparkSession`](api/python/pyspark.sql.html#pyspark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.build`:
-
-{% highlight python %}
-from pyspark.sql import SparkSession
-
-spark = SparkSession.build \
-  .master("local") \
-  .appName("Word Count") \
-  .config("spark.some.config.option", "some-value") \
-  .getOrCreate()
-{% endhighlight %}
+The entry point into all functionality in Spark is the [`SparkSession`](api/python/pyspark.sql.html#pyspark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.builder`:
 
+{% include_example init_session python/sql.py %}
 </div>
 
 <div data-lang="r"  markdown="1">

http://git-wip-us.apache.org/repos/asf/spark/blob/bde1d6a6/examples/src/main/java/org/apache/spark/examples/sql/JavaSparkSQL.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/sql/JavaSparkSQL.java b/examples/src/main/java/org/apache/spark/examples/sql/JavaSparkSQL.java
index e512979..7fc6c00 100644
--- a/examples/src/main/java/org/apache/spark/examples/sql/JavaSparkSQL.java
+++ b/examples/src/main/java/org/apache/spark/examples/sql/JavaSparkSQL.java
@@ -26,7 +26,9 @@ import org.apache.spark.api.java.function.Function;
 
 import org.apache.spark.sql.Dataset;
 import org.apache.spark.sql.Row;
+// $example on:init_session$
 import org.apache.spark.sql.SparkSession;
+// $example off:init_session$
 
 public class JavaSparkSQL {
   public static class Person implements Serializable {
@@ -51,10 +53,13 @@ public class JavaSparkSQL {
   }
 
   public static void main(String[] args) throws Exception {
+    // $example on:init_session$
     SparkSession spark = SparkSession
       .builder()
       .appName("JavaSparkSQL")
+      .config("spark.some.config.option", "some-value")
       .getOrCreate();
+    // $example off:init_session$
 
     System.out.println("=== Data source: RDD ===");
     // Load a text file and convert each line to a Java Bean.

http://git-wip-us.apache.org/repos/asf/spark/blob/bde1d6a6/examples/src/main/python/sql.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/sql.py b/examples/src/main/python/sql.py
index ac72469..ea11d2c 100644
--- a/examples/src/main/python/sql.py
+++ b/examples/src/main/python/sql.py
@@ -20,15 +20,20 @@ from __future__ import print_function
 import os
 import sys
 
+# $example on:init_session$
 from pyspark.sql import SparkSession
+# $example off:init_session$
 from pyspark.sql.types import Row, StructField, StructType, StringType, IntegerType
 
 
 if __name__ == "__main__":
+    # $example on:init_session$
     spark = SparkSession\
         .builder\
         .appName("PythonSQL")\
+        .config("spark.some.config.option", "some-value")\
         .getOrCreate()
+    # $example off:init_session$
 
     # A list of Rows. Infer schema from the first row, create a DataFrame and print the schema
     rows = [Row(name="John", age=19), Row(name="Smith", age=23), 
Row(name="Sarah", age=18)]

http://git-wip-us.apache.org/repos/asf/spark/blob/bde1d6a6/examples/src/main/scala/org/apache/spark/examples/sql/RDDRelation.scala
----------------------------------------------------------------------
diff --git a/examples/src/main/scala/org/apache/spark/examples/sql/RDDRelation.scala b/examples/src/main/scala/org/apache/spark/examples/sql/RDDRelation.scala
index 1b019fb..deaa9f2 100644
--- a/examples/src/main/scala/org/apache/spark/examples/sql/RDDRelation.scala
+++ b/examples/src/main/scala/org/apache/spark/examples/sql/RDDRelation.scala
@@ -18,7 +18,10 @@
 // scalastyle:off println
 package org.apache.spark.examples.sql
 
-import org.apache.spark.sql.{SaveMode, SparkSession}
+import org.apache.spark.sql.SaveMode
+// $example on:init_session$
+import org.apache.spark.sql.SparkSession
+// $example off:init_session$
 
 // One method for defining the schema of an RDD is to make a case class with the desired column
 // names and types.
@@ -26,13 +29,16 @@ case class Record(key: Int, value: String)
 
 object RDDRelation {
   def main(args: Array[String]) {
+    // $example on:init_session$
     val spark = SparkSession
       .builder
-      .appName("RDDRelation")
+      .appName("Spark Examples")
+      .config("spark.some.config.option", "some-value")
       .getOrCreate()
 
     // Importing the SparkSession gives access to all the SQL functions and implicit conversions.
     import spark.implicits._
+    // $example off:init_session$
 
     val df = spark.createDataFrame((1 to 100).map(i => Record(i, s"val_$i")))
     // Any RDD containing case classes can be used to create a temporary view. The schema of the

