Repository: spark
Updated Branches:
  refs/heads/branch-2.2 690f491f6 -> 1bcfa2a0c


Fix Java SimpleApp Spark application

## What changes were proposed in this pull request?

Add the missing `org.apache.spark.sql.Dataset` import and the missing parentheses needed to invoke `SparkSession::read()` and `SparkSession::readStream()` from Java.
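
For background, `read()` is an ordinary method in the Java API (it returns a `DataFrameReader`), so the parentheses are required; the parameterless `spark.read` form is Scala syntax. A minimal standalone sketch of the corrected call chain, with a class name and file path that are purely illustrative and not part of this patch:

/* ReadTextExample.java -- illustrative only, not part of the patch */
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.SparkSession;

public class ReadTextExample {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder().appName("Read Text Example").getOrCreate();
    // read() returns a DataFrameReader; textFile() then yields a Dataset<String>
    Dataset<String> lines = spark.read().textFile("README.md"); // placeholder path
    System.out.println("Number of lines: " + lines.count());
    spark.stop();
  }
}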

## How was this patch tested?

Built and ran the code for this application, and ran jekyll locally per docs/README.md.

Author: Christiam Camacho <cama...@ncbi.nlm.nih.gov>

Closes #18795 from christiam/master.

(cherry picked from commit dd72b10aba9997977f82605c5c1778f02dd1f91e)
Signed-off-by: Sean Owen <so...@cloudera.com>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/1bcfa2a0
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/1bcfa2a0
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/1bcfa2a0

Branch: refs/heads/branch-2.2
Commit: 1bcfa2a0ccdc1d3c3c5075bc6e2838c69f5b2f7f
Parents: 690f491
Author: Christiam Camacho <cama...@ncbi.nlm.nih.gov>
Authored: Thu Aug 3 23:40:25 2017 +0100
Committer: Sean Owen <so...@cloudera.com>
Committed: Thu Aug 3 23:40:33 2017 +0100

----------------------------------------------------------------------
 docs/quick-start.md                            | 3 ++-
 docs/structured-streaming-programming-guide.md | 6 +++---
 2 files changed, 5 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/1bcfa2a0/docs/quick-start.md
----------------------------------------------------------------------
diff --git a/docs/quick-start.md b/docs/quick-start.md
index cb5211a..c4c5a5a 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -297,12 +297,13 @@ We'll create a very simple Spark application, `SimpleApp.java`:
 {% highlight java %}
 /* SimpleApp.java */
 import org.apache.spark.sql.SparkSession;
+import org.apache.spark.sql.Dataset;
 
 public class SimpleApp {
   public static void main(String[] args) {
     String logFile = "YOUR_SPARK_HOME/README.md"; // Should be some file on your system
     SparkSession spark = SparkSession.builder().appName("Simple Application").getOrCreate();
-    Dataset<String> logData = spark.read.textFile(logFile).cache();
+    Dataset<String> logData = spark.read().textFile(logFile).cache();
 
     long numAs = logData.filter(s -> s.contains("a")).count();
     long numBs = logData.filter(s -> s.contains("b")).count();

http://git-wip-us.apache.org/repos/asf/spark/blob/1bcfa2a0/docs/structured-streaming-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/structured-streaming-programming-guide.md b/docs/structured-streaming-programming-guide.md
index 8f64faa..8367f5a 100644
--- a/docs/structured-streaming-programming-guide.md
+++ b/docs/structured-streaming-programming-guide.md
@@ -1041,8 +1041,8 @@ streamingDf.join(staticDf, "type", "right_join")  // right outer join with a sta
 <div data-lang="java"  markdown="1">
 
 {% highlight java %}
-Dataset<Row> staticDf = spark.read. ...;
-Dataset<Row> streamingDf = spark.readStream. ...;
+Dataset<Row> staticDf = spark.read(). ...;
+Dataset<Row> streamingDf = spark.readStream(). ...;
 streamingDf.join(staticDf, "type");         // inner equi-join with a static DF
 streamingDf.join(staticDf, "type", "right_join");  // right outer join with a 
static DF
 {% endhighlight %}
@@ -1087,7 +1087,7 @@ streamingDf
 <div data-lang="java"  markdown="1">
 
 {% highlight java %}
-Dataset<Row> streamingDf = spark.readStream. ...;  // columns: guid, eventTime, ...
+Dataset<Row> streamingDf = spark.readStream(). ...;  // columns: guid, eventTime, ...
 
 // Without watermark using guid column
 streamingDf.dropDuplicates("guid");
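
For completeness, a rough standalone Java sketch of the corrected `readStream()` usage; the socket source, port, deduplication column, and console sink below are assumptions chosen for illustration and are not part of this patch:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class ReadStreamExample {
  public static void main(String[] args) throws Exception {
    SparkSession spark = SparkSession.builder().appName("Read Stream Example").getOrCreate();

    // readStream() is a method call in the Java API, just like read()
    Dataset<Row> streamingDf = spark.readStream()
        .format("socket")               // lines of text from a socket; single column "value"
        .option("host", "localhost")
        .option("port", 9999)
        .load();

    // Drop duplicate lines; without a watermark the dedup state grows unbounded
    Dataset<Row> deduped = streamingDf.dropDuplicates("value");

    StreamingQuery query = deduped.writeStream()
        .outputMode("append")
        .format("console")
        .start();
    query.awaitTermination();
  }
}

A quick local test can feed the socket with `nc -lk 9999`.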

