Thank you Soumya Simanta and Tobias. I've deleted the contents of the work
folder on all the nodes.
Now it's working perfectly, as it was before.
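
(For anyone hitting the same issue, this is roughly what I ran from the
master; a minimal sketch, assuming passwordless ssh, and slave1..slave3 are
placeholder hostnames for my setup:)

    # Empty the Spark work directory on every node.
    # Assumes SPARK_HOME is set in the remote shell; otherwise use the
    # absolute path to the work directory.
    for host in slave1 slave2 slave3; do
        ssh "$host" 'rm -rf "$SPARK_HOME"/work/*'
    done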

Thank you
Karthik

On Fri, Sep 19, 2014 at 4:46 PM, Soumya Simanta <soumya.sima...@gmail.com>
wrote:

> One possible reason may be that the checkpointing directory
> $SPARK_HOME/work was rsynced as well.
> Try emptying the contents of the work folder on each node and try again.
>
>
>
> On Fri, Sep 19, 2014 at 4:53 AM, rapelly kartheek <kartheek.m...@gmail.com
> > wrote:
>
>> I followed this command: rsync -avL --progress path/to/spark-1.0.0
>> username@destinationhostname:path/to/destdirectory
>> Anyway, for now, I did it individually for each node.
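>>
>> (For clarity, "individually for each node" means a loop roughly like the
>> following; slave1..slave3 and the paths are placeholders for my setup:)
>>
>>     # Sync the Spark directory to each slave, one node at a time.
>>     for host in slave1 slave2 slave3; do
>>         rsync -avL --progress path/to/spark-1.0.0 \
>>             "username@$host:path/to/destdirectory"
>>     done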
>>
>> I copied to each node individually using the above command, so I guess
>> the copy should not contain any mixture of files. Also, as of now, I am
>> not seeing any MethodNotFound exceptions, but no job execution is taking
>> place.
>>
>> After some time, the nodes go down one by one and the cluster shuts down.
>>
>> On Fri, Sep 19, 2014 at 2:15 PM, Tobias Pfeiffer <t...@preferred.jp>
>> wrote:
>>
>>> Hi,
>>>
>>> On Fri, Sep 19, 2014 at 5:17 PM, rapelly kartheek <
>>> kartheek.m...@gmail.com> wrote:
>>>
>>>> > you have copied a lot of files from various hosts to
>>>> > username@slave3:path
>>>> only from one node to all the other nodes...
>>>>
>>>
>>> I don't think rsync can do that in one command as you described. My
>>> guess is that you now have a wild mixture of jar files across your
>>> cluster, which will lead to fancy exceptions like MethodNotFound etc.;
>>> that may be why your cluster is not working correctly.
>>>
>>> Tobias
>>>
>>>
>>>
>>
>
