As a rough outline:

Step 1: Install Hadoop 2.x on all the machines in the cluster.
Step 2: Verify that the Hadoop cluster is working (a quick sanity check
is sketched below).
Step 3: Set up Apache Spark on the cluster as described in its official
documentation, then check the status of the cluster on the master's web
UI (see the standalone start-up sketch after this list).
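
For step 2, a quick sanity check might look like this (assuming
HADOOP_HOME points at your install and the daemons were started with
start-dfs.sh / start-yarn.sh; adjust paths for your distribution):

    jps    # should list NameNode/DataNode (plus ResourceManager/NodeManager on YARN)
    $HADOOP_HOME/bin/hdfs dfsadmin -report    # all DataNodes should report as live
    # run one trivial job end-to-end:
    $HADOOP_HOME/bin/hadoop jar \
        $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10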
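
For step 3, a minimal standalone-mode sketch (assuming Spark is unpacked
to the same $SPARK_HOME on every node and the master has passwordless
SSH to the workers; the hostnames are placeholders):

    # list one worker hostname per line
    echo "worker1" >> $SPARK_HOME/conf/slaves
    echo "worker2" >> $SPARK_HOME/conf/slaves

    # start the master and all workers in one go
    $SPARK_HOME/sbin/start-all.sh

The master web UI should then be reachable at http://<master-host>:8080
and show every worker as ALIVE.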

Since this is a data mining project, configure Hive as well.
You can use Spark SQL (or AMPLab's older Shark, whose development has
since been folded into Spark SQL) as the SQL query layer over your data.
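
A minimal sketch of wiring Spark SQL to Hive (this assumes a Spark build
with Hive support; the hive-site.xml path below is just an example from
a typical install, and master-host is a placeholder):

    # make the Hive metastore visible to Spark SQL
    cp /etc/hive/conf/hive-site.xml $SPARK_HOME/conf/

    # query your Hive tables through the Spark SQL CLI (Spark 1.1+)
    $SPARK_HOME/bin/spark-sql --master spark://master-host:7077 \
        -e "SHOW TABLES;"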

On Mon, Dec 8, 2014 at 11:01 PM, riginos <samarasrigi...@gmail.com> wrote:

> My thesis is related to big data mining and I have a cluster in the
> laboratory of my university. My task is to install apache spark on it and
> use it for extraction purposes. Is there any understandable guidance on
> how to do this?
