Repository: spark
Updated Branches:
  refs/heads/master 2c3f83c34 -> 797f8a000


[SPARK-6402][DOC] - Remove some references to Shark in docs and ec2

The EC2 script and job scheduling documentation still referred to Shark.
I removed these references.

I also removed a remaining `SHARK_VERSION` variable from `ec2-variables.sh`.

Author: Pierre Borckmans <pierre.borckm...@realimpactanalytics.com>

Closes #5083 from pierre-borckmans/remove_refererences_to_shark_in_docs and squashes the following commits:

4e90ffc [Pierre Borckmans] Removed deprecated SHARK_VERSION
caea407 [Pierre Borckmans] Remove shark reference from ec2 script doc
196c744 [Pierre Borckmans] Removed references to Shark


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/797f8a00
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/797f8a00
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/797f8a00

Branch: refs/heads/master
Commit: 797f8a000773d848fa52c7fe2eb1b5e5e7f6c55a
Parents: 2c3f83c
Author: Pierre Borckmans <pierre.borckm...@realimpactanalytics.com>
Authored: Thu Mar 19 08:02:06 2015 -0400
Committer: Sean Owen <so...@cloudera.com>
Committed: Thu Mar 19 08:02:06 2015 -0400

----------------------------------------------------------------------
 docs/ec2-scripts.md                                | 2 +-
 docs/job-scheduling.md                             | 6 ++----
 ec2/deploy.generic/root/spark-ec2/ec2-variables.sh | 1 -
 3 files changed, 3 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/797f8a00/docs/ec2-scripts.md
----------------------------------------------------------------------
diff --git a/docs/ec2-scripts.md b/docs/ec2-scripts.md
index 8c9a1e1..7f60f82 100644
--- a/docs/ec2-scripts.md
+++ b/docs/ec2-scripts.md
@@ -5,7 +5,7 @@ title: Running Spark on EC2
 
 The `spark-ec2` script, located in Spark's `ec2` directory, allows you
 to launch, manage and shut down Spark clusters on Amazon EC2. It automatically
-sets up Spark, Shark and HDFS on the cluster for you. This guide describes 
+sets up Spark and HDFS on the cluster for you. This guide describes 
 how to use `spark-ec2` to launch clusters, how to run jobs on them, and how 
 to shut them down. It assumes you've already signed up for an EC2 account 
 on the [Amazon Web Services site](http://aws.amazon.com/).

http://git-wip-us.apache.org/repos/asf/spark/blob/797f8a00/docs/job-scheduling.md
----------------------------------------------------------------------
diff --git a/docs/job-scheduling.md b/docs/job-scheduling.md
index 5295e35..963e88a 100644
--- a/docs/job-scheduling.md
+++ b/docs/job-scheduling.md
@@ -14,8 +14,7 @@ runs an independent set of executor processes. The cluster managers that Spark r
 facilities for [scheduling across applications](#scheduling-across-applications). Second,
 _within_ each Spark application, multiple "jobs" (Spark actions) may be running concurrently
 if they were submitted by different threads. This is common if your application is serving requests
-over the network; for example, the [Shark](http://shark.cs.berkeley.edu) server works this way. Spark
-includes a [fair scheduler](#scheduling-within-an-application) to schedule resources within each SparkContext.
+over the network. Spark includes a [fair scheduler](#scheduling-within-an-application) to schedule resources within each SparkContext.
 
 # Scheduling Across Applications
 
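For reference, a minimal sketch of the within-application scheduling that this paragraph describes (not part of this patch; the object name, app name and data are made up): two actions are submitted from separate threads against a single SparkContext, with `spark.scheduler.mode` set to FAIR so neither job has to queue behind the other.

import org.apache.spark.{SparkConf, SparkContext}

object ConcurrentJobsSketch {
  def main(args: Array[String]): Unit = {
    // FAIR scheduling lets jobs from different threads share executors
    // instead of queueing behind one another (the default is FIFO).
    val conf = new SparkConf()
      .setAppName("concurrent-jobs-sketch")
      .setMaster("local[4]") // local master only for a quick test run
      .set("spark.scheduler.mode", "FAIR")
    val sc = new SparkContext(conf)

    val data = sc.parallelize(1 to 1000000)

    // Each thread triggers its own Spark action, so two jobs run
    // concurrently within the same application.
    val t1 = new Thread(new Runnable {
      def run(): Unit = println("sum = " + data.map(_.toLong).reduce(_ + _))
    })
    val t2 = new Thread(new Runnable {
      def run(): Unit = println("even count = " + data.filter(_ % 2 == 0).count())
    })
    t1.start(); t2.start()
    t1.join(); t2.join()

    sc.stop()
  }
}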
@@ -52,8 +51,7 @@ an application to gain back cores on one node when it has work to do. To use thi

 Note that none of the modes currently provide memory sharing across applications. If you would like to share
 data this way, we recommend running a single server application that can serve multiple requests by querying
-the same RDDs. For example, the [Shark](http://shark.cs.berkeley.edu) JDBC server works this way for SQL
-queries. In future releases, in-memory storage systems such as [Tachyon](http://tachyon-project.org) will
+the same RDDs. In future releases, in-memory storage systems such as [Tachyon](http://tachyon-project.org) will
 provide another approach to share RDDs.
 
 ## Dynamic Resource Allocation
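Similarly, a minimal sketch of the single-server pattern recommended above (not part of this patch; the object name and input path are hypothetical): one long-running application caches an RDD once and answers repeated queries against it, so every request shares the same in-memory data instead of each application loading its own copy.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

object SharedRddServerSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("shared-rdd-sketch").setMaster("local[4]"))

    // Hypothetical dataset; cached once so later queries reuse the same
    // in-memory partitions instead of re-reading the input.
    val events: RDD[String] = sc.textFile("hdfs:///data/events").cache()

    // Stand-in for requests arriving over the network: each call runs a
    // separate Spark job, but all of them query the same cached RDD.
    def countMatching(keyword: String): Long =
      events.filter(_.contains(keyword)).count()

    println("errors   = " + countMatching("ERROR"))
    println("warnings = " + countMatching("WARN"))

    sc.stop()
  }
}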

http://git-wip-us.apache.org/repos/asf/spark/blob/797f8a00/ec2/deploy.generic/root/spark-ec2/ec2-variables.sh
----------------------------------------------------------------------
diff --git a/ec2/deploy.generic/root/spark-ec2/ec2-variables.sh b/ec2/deploy.generic/root/spark-ec2/ec2-variables.sh
index 0857657..4f3e8da 100644
--- a/ec2/deploy.generic/root/spark-ec2/ec2-variables.sh
+++ b/ec2/deploy.generic/root/spark-ec2/ec2-variables.sh
@@ -25,7 +25,6 @@ export MAPRED_LOCAL_DIRS="{{mapred_local_dirs}}"
 export SPARK_LOCAL_DIRS="{{spark_local_dirs}}"
 export MODULES="{{modules}}"
 export SPARK_VERSION="{{spark_version}}"
-export SHARK_VERSION="{{shark_version}}"
 export TACHYON_VERSION="{{tachyon_version}}"
 export HADOOP_MAJOR_VERSION="{{hadoop_major_version}}"
 export SWAP_MB="{{swap}}"

