Hi,
This is the command I am using for submitting my application, SimpleApp:
./bin/spark-submit --class org.apache.spark.examples.SimpleApp \
  --deploy-mode client --master spark://karthik:7077 \
  $SPARK_HOME/examples/*/scala-*/spark-examples-*.jar /text-data
On Thu, Sep 25, 2014 at 6:52 AM, Tobias P
Hi,
I assume you unintentionally did not reply to the list, so I'm adding it
back to CC.
How do you submit your job to the cluster?
Tobias
On Thu, Sep 25, 2014 at 2:21 AM, rapelly kartheek
wrote:
> How do I find out whether a node in the cluster is a master or a slave?
> Till now I was thinki
Thank you, Soumya Simanta and Tobias. I've deleted the contents of the work
folder on all the nodes.
Now it's working perfectly, as it was before.
Thank you
Karthik
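Since the problem came from syncing the runtime work directory along with the build, future syncs can skip it with rsync's --exclude option. A minimal sketch; the paths and hostname below are placeholders, not from this thread:

```shell
# Sync a modified Spark build while excluding the runtime work/ directory,
# so per-node application state is not overwritten. Paths are placeholders.
cmd="rsync -avL --progress --exclude=work/ path/to/spark-1.0.0 username@slave1:path/to/destdirectory"
echo "$cmd"   # dry run: print the command instead of executing it
```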
On Fri, Sep 19, 2014 at 4:46 PM, Soumya Simanta
wrote:
> One possible reason is maybe that the checkpointing directory
> $SPARK_HOME
One possible reason may be that the checkpointing directory
$SPARK_HOME/work is rsynced as well.
Try emptying the contents of the work folder on each node and try again.
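The advice above can be scripted with a loop over the nodes. A hedged sketch: the hostnames and SPARK_HOME path are assumptions, and it only prints the commands (a real run would execute ssh instead of echoing):

```shell
# Build the per-node command that clears Spark's work directory.
# Hostnames and the SPARK_HOME path are placeholders for illustration.
SPARK_HOME=/opt/spark-1.0.0
for host in slave1 slave2 slave3; do
  cmd="ssh $host rm -rf $SPARK_HOME/work"
  echo "$cmd"   # dry run; to actually run: ssh "$host" "rm -rf $SPARK_HOME/work/*"
done
```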
On Fri, Sep 19, 2014 at 4:53 AM, rapelly kartheek
wrote:
> I
> followed this command: rsync -avL --progress path/to/spark
I followed this command: rsync -avL --progress path/to/spark-1.0.0
username@destinationhostname:path/to/destdirectory. Anyway, for now, I did it
individually for each node.
I copied to each node individually using the above command.
So, I guess the copying may not contain any
Hi,
On Fri, Sep 19, 2014 at 5:17 PM, rapelly kartheek
wrote:
> > you have copied a lot of files from various hosts to
> > username@slave3:path
>
> only from one node to all the other nodes...
>
I don't think rsync can do that in one command as you described. My guess
is that now you have a
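rsync accepts only one destination per invocation, so syncing several slaves means one invocation per host. A sketch of the loop; the hostnames and paths are assumptions, and it echoes the commands rather than running them:

```shell
# One rsync invocation per slave; hostnames and paths are placeholders.
SRC=path/to/spark-1.0.0
DEST=path/to/destdirectory
for host in slave1 slave2 slave3; do
  cmd="rsync -avL --progress $SRC username@$host:$DEST"
  echo "$cmd"   # dry run: print the command instead of running it
done
```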
-- Forwarded message --
From: rapelly kartheek
Date: Fri, Sep 19, 2014 at 1:51 PM
Subject: Re: rsync problem
To: Tobias Pfeiffer
Any idea why the cluster is dying down?
On Fri, Sep 19, 2014 at 1:47 PM, rapelly kartheek
wrote:
>
> you have copied a lot o
you have copied a lot of files from various hosts to username@slave3:path

only from one node to all the other nodes...
On Fri, Sep 19, 2014 at 1:45 PM, rapelly kartheek
wrote:
> Hi Tobias,
>
> I've copied the files from master to all the slaves.
>
> On Fri, Sep 19, 2014 at 1:37 PM, Tobias
Hi Tobias,
I've copied the files from master to all the slaves.
On Fri, Sep 19, 2014 at 1:37 PM, Tobias Pfeiffer wrote:
> Hi,
>
> On Fri, Sep 19, 2014 at 5:02 PM, rapelly kartheek wrote:
>>
>> This worked perfectly. But, I wanted to simultaneously rsync all the
>> slaves. So, added the other
Hi,
On Fri, Sep 19, 2014 at 5:02 PM, rapelly kartheek
wrote:
>
> This worked perfectly. But, I wanted to simultaneously rsync all the
> slaves. So, added the other slaves as following:
>
> rsync -avL --progress path/to/spark-1.0.0
> username@destinationhostname:path/to/destdirectory username@sl
Hi,
I'd made some modifications to the Spark source code on the master and
propagated them to the slaves using rsync.
I followed this command:
rsync -avL --progress path/to/spark-1.0.0 username@destinationhostname:path/to/destdirectory
This worked perfectly. But, I wanted to simultaneously rs