I figured it out: the problem is that the version of "spark-core" in my
project is different from the version running in the pseudo-cluster.
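
For anyone who hits the same thing, here is a minimal build.sbt sketch of the
fix. The version string is only an example; use the exact version your
pseudo-cluster is running (the standalone master's web UI shows it):

    // build.sbt -- pin spark-core to the cluster's version.
    // 0.8.1-incubating and Scala 2.9.3 are assumed examples here,
    // not necessarily what your cluster runs.
    scalaVersion := "2.9.3"

    libraryDependencies += "org.apache.spark" %% "spark-core" % "0.8.1-incubating"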


On Fri, Dec 20, 2013 at 2:47 PM, Michael Kun Yang <kuny...@stanford.edu> wrote:

> Thank you very much.
>
>
> On Friday, December 20, 2013, Christopher Nguyen wrote:
>
>> MichaelY, this sort of thing, where "it could be any of dozens of things,"
>> can usually be resolved by asking someone to share your screen with you for
>> 5 minutes. It's far more productive than guessing over email.
>>
>> If @freeman is willing, you can send him a private message to set that up
>> over Google Hangout.
>>
>> --
>> Christopher T. Nguyen
>> Co-founder & CEO, Adatao <http://adatao.com>
>> linkedin.com/in/ctnguyen
>>
>>
>>
>> On Fri, Dec 20, 2013 at 1:57 PM, Michael Kun Yang
>> <kuny...@stanford.edu> wrote:
>>
>>> It's alive. I just restarted it, but that didn't help.
>>>
>>>
>>> On Friday, December 20, 2013, Michael (Bach) Bui wrote:
>>>
>>>> Check whether your worker is “alive”.
>>>> Also take a look at your master log and see if there are any error
>>>> messages about the worker.
>>>>
>>>> This can usually be fixed by restarting Spark.
>>>>
>>>> On Dec 20, 2013, at 3:12 PM, Michael Kun Yang <kuny...@stanford.edu>
>>>> wrote:
>>>>
>>>> Hi,
>>>>
>>>> I really need help; I went through previous posts on the mailing list
>>>> but still cannot resolve this problem.
>>>>
>>>> It works when I use the local[n] option, but an error occurs when I use
>>>> spark://master.local:7077 (see the sketch at the end of this thread).
>>>>
>>>> I checked the UI: the workers are correctly registered, and I set
>>>> SPARK_MEM to a value compatible with my machine.
>>>>
>>>> Best
>>>>
>>>>
>>>>
>>
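
For readers who land here with the same symptom, a minimal sketch of the two
modes mentioned in the original question (the app name and thread count are
placeholders; spark://master.local:7077 is the master URL from the thread):

    import org.apache.spark.SparkContext

    // Local mode: everything runs in one JVM with 4 worker threads,
    // so no version skew with a cluster is possible.
    // val sc = new SparkContext("local[4]", "MyApp")

    // Standalone mode: the driver connects to the master and ships the
    // application to the workers; a spark-core version mismatch between
    // the application and the cluster breaks exactly this step.
    val sc = new SparkContext("spark://master.local:7077", "MyApp")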
