Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16856
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16856#discussion_r104600900
--- Diff: docs/quick-start.md ---
@@ -438,8 +412,7 @@ Lines with a: 46, Lines with b: 23
# Where to Go from Here
Congratulations on running your
Github user sameeragarwal commented on a diff in the pull request:
https://github.com/apache/spark/pull/16856#discussion_r104566960
--- Diff: docs/quick-start.md ---
@@ -438,7 +412,7 @@ Lines with a: 46, Lines with b: 23
# Where to Go from Here
Congratulations on running y
Github user ueshin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16856#discussion_r104560265
--- Diff: docs/quick-start.md ---
@@ -438,7 +412,7 @@ Lines with a: 46, Lines with b: 23
# Where to Go from Here
Congratulations on running your fir
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16856#discussion_r104255799
--- Diff: docs/quick-start.md ---
@@ -65,41 +66,41 @@ res3: Long = 15
./bin/pyspark
-Spark's primary abstraction is a distributed col
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16856#discussion_r104255651
--- Diff: docs/quick-start.md ---
@@ -10,12 +10,13 @@ description: Quick start tutorial for Spark
SPARK_VERSION_SHORT
This tutorial provides a quick int
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16856#discussion_r104256037
--- Diff: docs/quick-start.md ---
@@ -137,37 +138,24 @@ res6: Array[(String, Int)] = Array((means,1),
(under,2), (this,3), (Because,1),
{% h
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16856#discussion_r104255711
--- Diff: docs/quick-start.md ---
@@ -29,28 +30,28 @@ or Python. Start it by running the following in the
Spark directory:
./bin/spark-shell
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16856#discussion_r102616268
--- Diff: docs/quick-start.md ---
@@ -211,7 +199,7 @@ a cluster, as described in the [programming
guide](programming-guide.html#initia
It may seem si
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16856#discussion_r102616245
--- Diff: docs/quick-start.md ---
@@ -211,7 +199,7 @@ a cluster, as described in the [programming
guide](programming-guide.html#initia
It may seem si
Github user heuermh commented on a diff in the pull request:
https://github.com/apache/spark/pull/16856#discussion_r101275550
--- Diff: docs/programming-guide.md ---
@@ -153,13 +149,12 @@ JavaSparkContext sc = new JavaSparkContext(conf);
-The first thing a Spar
Github user heuermh commented on a diff in the pull request:
https://github.com/apache/spark/pull/16856#discussion_r101275374
--- Diff: docs/programming-guide.md ---
@@ -125,15 +124,12 @@ $ PYSPARK_PYTHON=/opt/pypy-2.5/bin/pypy
bin/spark-submit examples/src/main/pytho
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16856#discussion_r100277353
--- Diff: docs/programming-guide.md ---
@@ -76,10 +75,10 @@ In addition, if you wish to access an HDFS cluster, you
need to add a dependency
Final
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16856#discussion_r100213922
--- Diff: docs/programming-guide.md ---
@@ -244,13 +239,13 @@ use IPython, set the `PYSPARK_DRIVER_PYTHON` variable
to `ipython` when running
$ PYSPA
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16856#discussion_r100213837
--- Diff: docs/programming-guide.md ---
@@ -77,9 +76,9 @@ In addition, if you wish to access an HDFS cluster, you
need to add a dependency
Finally, y
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16856#discussion_r100089351
--- Diff: docs/programming-guide.md ---
@@ -77,9 +76,9 @@ In addition, if you wish to access an HDFS cluster, you
need to add a dependency
Finally, you
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16856#discussion_r100089566
--- Diff: docs/programming-guide.md ---
@@ -244,13 +239,13 @@ use IPython, set the `PYSPARK_DRIVER_PYTHON` variable
to `ipython` when running
$ PYSPARK_
GitHub user cloud-fan opened a pull request:
https://github.com/apache/spark/pull/16856
[SPARK-19516][DOC] update public doc to use SparkSession instead of
SparkContext
## What changes were proposed in this pull request?
After Spark 2.0, `SparkSession` becomes the new entry point
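The change the PR describes can be sketched as follows. This is a minimal illustration of the pattern the updated docs recommend, not text from the PR itself; the app name and input path are placeholders.

```scala
// Before Spark 2.0: docs created a SparkContext directly.
// import org.apache.spark.{SparkConf, SparkContext}
// val sc = new SparkContext(new SparkConf().setAppName("Example"))

// After Spark 2.0: SparkSession is the unified entry point.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .appName("Example")          // placeholder application name
  .getOrCreate()

// The old SparkContext is still reachable when RDD APIs are needed.
val sc = spark.sparkContext

// Typical post-2.0 usage goes through the session, e.g. reading data:
val lines = spark.read.textFile("README.md")  // placeholder path

spark.stop()
```

The underlying `SparkContext` is not removed; `SparkSession` wraps it, so existing RDD-based examples keep working while new docs lead with the session-based API.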