Repository: spark
Updated Branches:
  refs/heads/master a0e46a0d2 -> 3c0156899


Update README to include DataFrames and zinc.

Also cut trailing whitespace.

Author: Reynold Xin <r...@databricks.com>

Closes #6548 from rxin/readme and squashes the following commits:

630efc3 [Reynold Xin] Update README to include DataFrames and zinc.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/3c015689
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/3c015689
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/3c015689

Branch: refs/heads/master
Commit: 3c0156899dc1ec1f7dfe6d7c8af47fa6dc7d00bf
Parents: a0e46a0
Author: Reynold Xin <r...@databricks.com>
Authored: Sun May 31 23:55:45 2015 -0700
Committer: Reynold Xin <r...@databricks.com>
Committed: Sun May 31 23:55:45 2015 -0700

----------------------------------------------------------------------
 README.md | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/3c015689/README.md
----------------------------------------------------------------------
diff --git a/README.md b/README.md
index 9c09d40..380422c 100644
--- a/README.md
+++ b/README.md
@@ -3,8 +3,8 @@
 Spark is a fast and general cluster computing system for Big Data. It provides
 high-level APIs in Scala, Java, and Python, and an optimized engine that
 supports general computation graphs for data analysis. It also supports a
-rich set of higher-level tools including Spark SQL for SQL and structured
-data processing, MLlib for machine learning, GraphX for graph processing,
+rich set of higher-level tools including Spark SQL for SQL and DataFrames,
+MLlib for machine learning, GraphX for graph processing,
 and Spark Streaming for stream processing.
 
 <http://spark.apache.org/>
@@ -22,7 +22,7 @@ This README file only contains basic setup instructions.
 Spark is built using [Apache Maven](http://maven.apache.org/).
 To build Spark and its example programs, run:
 
-    mvn -DskipTests clean package
+    build/mvn -DskipTests clean package
 
 (You do not need to do this if you downloaded a pre-built package.)
 More detailed documentation is available from the project site, at
@@ -43,7 +43,7 @@ Try the following command, which should return 1000:
 Alternatively, if you prefer Python, you can use the Python shell:
 
     ./bin/pyspark
-    
+
 And run the following command, which should also return 1000:
 
     >>> sc.parallelize(range(1000)).count()
@@ -58,9 +58,9 @@ To run one of them, use `./bin/run-example <class> [params]`. For example:
 will run the Pi example locally.
 
 You can set the MASTER environment variable when running examples to submit
-examples to a cluster. This can be a mesos:// or spark:// URL, 
-"yarn-cluster" or "yarn-client" to run on YARN, and "local" to run 
-locally with one thread, or "local[N]" to run locally with N threads. You 
+examples to a cluster. This can be a mesos:// or spark:// URL,
+"yarn-cluster" or "yarn-client" to run on YARN, and "local" to run
+locally with one thread, or "local[N]" to run locally with N threads. You
 can also use an abbreviated class name if the class is in the `examples`
 package. For instance:
 
@@ -75,7 +75,7 @@ can be run using:
 
     ./dev/run-tests
 
-Please see the guidance on how to 
+Please see the guidance on how to
 [run tests for a module, or individual tests](https://cwiki.apache.org/confluence/display/SPARK/Useful+Developer+Tools).
 
 ## A Note About Hadoop Versions

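As context for the DataFrames mention this commit adds to the README, here
is a minimal sketch of the DataFrame API the new wording points to. It is an
illustration only, assuming a Spark 1.4-era SQLContext; the sample data and
names (df, pairs) are hypothetical and not part of the commit.

    # Hypothetical PySpark example of the DataFrame API (Spark 1.4 era).
    from pyspark import SparkContext
    from pyspark.sql import SQLContext

    sc = SparkContext("local", "dataframe-sketch")
    sqlContext = SQLContext(sc)

    # Build a small DataFrame from an in-memory list and filter it.
    df = sqlContext.createDataFrame([(1, "a"), (2, "b")], ["id", "letter"])
    df.filter(df.id > 1).show()

    # The same data can be queried with SQL via a temporary table.
    df.registerTempTable("pairs")
    sqlContext.sql("SELECT letter FROM pairs WHERE id = 2").show()

    sc.stop()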

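The MASTER values discussed in the fourth hunk ("local", "local[N]", and
mesos:// or spark:// URLs) can also be passed programmatically when
constructing a SparkContext. A small sketch, assuming a local run; the app
name is hypothetical:

    # Hypothetical equivalent of MASTER=local[4] for the examples above.
    from pyspark import SparkContext

    sc = SparkContext("local[4]", "master-sketch")  # four local worker threads
    print(sc.parallelize(range(1000)).count())      # should print 1000
    sc.stop()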