spark git commit: [SPARK-16303][DOCS][EXAMPLES] Minor Scala/Java example update

2016-07-19 Thread yhuai
Repository: spark
Updated Branches:
  refs/heads/branch-2.0 24ea87519 -> ef2a6f131


[SPARK-16303][DOCS][EXAMPLES] Minor Scala/Java example update

## What changes were proposed in this pull request?

This PR moves the last remaining hard-coded Scala example snippet from the SQL
programming guide into `SparkSqlExample.scala`. It also renames all Scala/Java
example files so that every "Sql" in the file names becomes "SQL".
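
For context, here is a minimal sketch of the kind of labeled snippet the guide now pulls in through the `{% include_example init_session ... %}` tag instead of hard-coding it. This is not the committed file: the object name is illustrative, and it assumes the `$example on:<label>$` / `$example off:<label>$` markers that the docs' include_example plugin extracts.

```scala
import org.apache.spark.sql.SparkSession

object SparkSQLExampleSketch {
  def main(args: Array[String]): Unit = {
    // $example on:init_session$
    // Entry point into all Spark SQL functionality: build (or reuse) a SparkSession.
    val spark = SparkSession
      .builder()
      .appName("Spark SQL basic example")
      .config("spark.some.config.option", "some-value")
      .getOrCreate()
    // $example off:init_session$

    spark.stop()
  }
}
```

Only the lines between the two markers are rendered into the generated HTML page.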

## How was this patch tested?

Manually verified the generated HTML page.

Author: Cheng Lian 

Closes #14245 from liancheng/minor-scala-example-update.

(cherry picked from commit 1426a080528bdb470b5e81300d892af45dd188bf)
Signed-off-by: Yin Huai 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/ef2a6f13
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/ef2a6f13
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/ef2a6f13

Branch: refs/heads/branch-2.0
Commit: ef2a6f1310777bb6ea2b157a873c3785231b104a
Parents: 24ea875
Author: Cheng Lian 
Authored: Mon Jul 18 23:07:59 2016 -0700
Committer: Yin Huai 
Committed: Mon Jul 18 23:08:11 2016 -0700

--
 docs/sql-programming-guide.md                   |  57 ++--
 .../examples/sql/JavaSQLDataSourceExample.java  | 217 +++
 .../spark/examples/sql/JavaSparkSQLExample.java | 336 +++
 .../spark/examples/sql/JavaSparkSqlExample.java | 336 ---
 .../examples/sql/JavaSqlDataSourceExample.java  | 217 ---
 .../examples/sql/SQLDataSourceExample.scala     | 148 +++
 .../spark/examples/sql/SparkSQLExample.scala    | 254 +++
 .../spark/examples/sql/SparkSqlExample.scala    | 254 ---
 .../examples/sql/SqlDataSourceExample.scala     | 148 ---
 9 files changed, 983 insertions(+), 984 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/ef2a6f13/docs/sql-programming-guide.md
--
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index a4127da..a88efb7 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -65,14 +65,14 @@ Throughout this document, we will often refer to Scala/Java Datasets of `Row`s a
 
 The entry point into all functionality in Spark is the [`SparkSession`](api/scala/index.html#org.apache.spark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.builder()`:
 
-{% include_example init_session scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example init_session scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
 
 
 
 
 The entry point into all functionality in Spark is the [`SparkSession`](api/java/index.html#org.apache.spark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.builder()`:
 
-{% include_example init_session java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example init_session java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}
 
 
 
@@ -105,7 +105,7 @@ from a Hive table, or from [Spark data sources](#data-sources).
 
 As an example, the following creates a DataFrame based on the content of a JSON file:
 
-{% include_example create_df scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example create_df scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
 
 
 
@@ -114,7 +114,7 @@ from a Hive table, or from [Spark data sources](#data-sources).
 
 As an example, the following creates a DataFrame based on the content of a JSON file:
 
-{% include_example create_df java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example create_df java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}
 
 
 
@@ -155,7 +155,7 @@ Here we include some basic examples of structured data processing using Datasets
 
 
 
-{% include_example untyped_ops scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example untyped_ops scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
 
 For a complete list of the types of operations that can be performed on a Dataset refer to the [API Documentation](api/scala/index.html#org.apache.spark.sql.Dataset).
 
@@ -164,7 +164,7 @@ In addition to simple column references and expressions, Datasets also have a ri
 
 
 
-{% include_example untyped_ops java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example untyped_ops java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}
 
 For a complete list of the types of operations that can be performed on a Dataset refer to the [API Documentation](api/java/org/apache/spark/sql/Dataset.html).
 
@@ 

spark git commit: [SPARK-16303][DOCS][EXAMPLES] Minor Scala/Java example update

2016-07-19 Thread yhuai
Repository: spark
Updated Branches:
  refs/heads/master e5fbb182c -> 1426a0805


[SPARK-16303][DOCS][EXAMPLES] Minor Scala/Java example update

## What changes were proposed in this pull request?

This PR moves the last remaining hard-coded Scala example snippet from the SQL
programming guide into `SparkSqlExample.scala`. It also renames all Scala/Java
example files so that every "Sql" in the file names becomes "SQL".
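
For reference, a minimal sketch of the kind of labeled snippet the guide's `create_df` include resolves to. The object name and comments are illustrative, not the committed file, and the JSON path assumes the stock `people.json` resource shipped with the Spark examples.

```scala
import org.apache.spark.sql.SparkSession

object CreateDataFrameSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("create_df sketch").getOrCreate()

    // $example on:create_df$
    // Read a line-delimited JSON file into a DataFrame.
    val df = spark.read.json("examples/src/main/resources/people.json")
    // Print the DataFrame contents to stdout.
    df.show()
    // $example off:create_df$

    spark.stop()
  }
}
```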

## How was this patch tested?

Manually verified the generated HTML page.

Author: Cheng Lian 

Closes #14245 from liancheng/minor-scala-example-update.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/1426a080
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/1426a080
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/1426a080

Branch: refs/heads/master
Commit: 1426a080528bdb470b5e81300d892af45dd188bf
Parents: e5fbb18
Author: Cheng Lian 
Authored: Mon Jul 18 23:07:59 2016 -0700
Committer: Yin Huai 
Committed: Mon Jul 18 23:07:59 2016 -0700

--
 docs/sql-programming-guide.md                   |  57 ++--
 .../examples/sql/JavaSQLDataSourceExample.java  | 217 +++
 .../spark/examples/sql/JavaSparkSQLExample.java | 336 +++
 .../spark/examples/sql/JavaSparkSqlExample.java | 336 ---
 .../examples/sql/JavaSqlDataSourceExample.java  | 217 ---
 .../examples/sql/SQLDataSourceExample.scala     | 148 +++
 .../spark/examples/sql/SparkSQLExample.scala    | 254 +++
 .../spark/examples/sql/SparkSqlExample.scala    | 254 ---
 .../examples/sql/SqlDataSourceExample.scala     | 148 ---
 9 files changed, 983 insertions(+), 984 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/1426a080/docs/sql-programming-guide.md
--
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index 4413fdd..71f3ee4 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -65,14 +65,14 @@ Throughout this document, we will often refer to Scala/Java Datasets of `Row`s a
 
 The entry point into all functionality in Spark is the [`SparkSession`](api/scala/index.html#org.apache.spark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.builder()`:
 
-{% include_example init_session scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example init_session scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
 
 
 
 
 The entry point into all functionality in Spark is the [`SparkSession`](api/java/index.html#org.apache.spark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.builder()`:
 
-{% include_example init_session java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example init_session java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}
 
 
 
@@ -105,7 +105,7 @@ from a Hive table, or from [Spark data sources](#data-sources).
 
 As an example, the following creates a DataFrame based on the content of a JSON file:
 
-{% include_example create_df scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example create_df scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
 
 
 
@@ -114,7 +114,7 @@ from a Hive table, or from [Spark data sources](#data-sources).
 
 As an example, the following creates a DataFrame based on the content of a JSON file:
 
-{% include_example create_df java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example create_df java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}
 
 
 
@@ -155,7 +155,7 @@ Here we include some basic examples of structured data processing using Datasets
 
 
 
-{% include_example untyped_ops scala/org/apache/spark/examples/sql/SparkSqlExample.scala %}
+{% include_example untyped_ops scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
 
 For a complete list of the types of operations that can be performed on a Dataset refer to the [API Documentation](api/scala/index.html#org.apache.spark.sql.Dataset).
 
@@ -164,7 +164,7 @@ In addition to simple column references and expressions, Datasets also have a ri
 
 
 
-{% include_example untyped_ops java/org/apache/spark/examples/sql/JavaSparkSqlExample.java %}
+{% include_example untyped_ops java/org/apache/spark/examples/sql/JavaSparkSQLExample.java %}
 
 For a complete list of the types of operations that can be performed on a Dataset refer to the [API Documentation](api/java/org/apache/spark/sql/Dataset.html).
 
@@ -249,13 +249,13 @@ In addition to simple column references and expressions, DataFrames also have a
 
 The `sql` function on a