>>>>> You can simply run your application *without* any scripts whatsoever, and
>>>>> submit your JAR to the SparkContext constructor, which will distribute it.
>>>>> You can launch your application with “scala”, “java”, or whatever tool
>>>>> you’d prefer.
>>>>>
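A minimal sketch of the constructor-based approach described above, assuming the 0.8-era SparkContext API; the master URL, Spark home, jar path, and class name are placeholders, not taken from the thread:

    import org.apache.spark.SparkContext

    object MyJob {
      def main(args: Array[String]) {
        // The jar listed here is shipped to the workers by the SparkContext
        // itself, so no launcher script is needed; the class can be started
        // with plain `scala` or `java`.
        val sc = new SparkContext(
          "spark://master.local:7077",           // cluster URL
          "MyJob",                               // application name
          "/root/spark",                         // Spark home on the cluster
          Seq("target/scala-2.9.3/my-job.jar"))  // this application's jar(s)

        val evens = sc.parallelize(1 to 1000).filter(_ % 2 == 0).count()
        println("even numbers: " + evens)
        sc.stop()
      }
    }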
>>>>
>>>> I'm afraid what you said about 'simply run your application *without*
>>>> any scripts' does not work for me.
>>>>
>>>> Are Spark users supposed to create something like run-example for
>>>> their own jobs?
>>>>
>>>>>
>>>>> Matei
>>>>>
>>>>> On Jan 8, 2014, at 8:06 PM, Aureliano Buendia wrote:
>>>>> The strange thing is that spark examples work fine, but when I include
>>>>> a spark example in my jar and deploy it, I get this error for the very
>>>>> same example:
>>>>>
>>>>> WARN ClusterScheduler: Initial job has not accepted any resources;
>>>>> check your cluster UI to ensure that workers are registered and have
>>>>> sufficient memory
…-incubating.jar:/root/spark/assembly/target/scala-2.9.3/spark-assembly_2.9.3-0.8.1-incubating-hadoop1.0.4.jar
org.apache.spark.examples.SparkPi `cat spark-ec2/cluster-url`

And you'll get the error:

WARN cluster.ClusterScheduler: Initial job has not accepted any resources;
check your cluster UI to ensure that workers are registered and have
sufficient memory
>>> Maybe there are more env and java config variables about memory that
>>> I'm missing.
>>>
>>> By the way, that bit of the error asking to check the web UI, it's just
>>> redundant. The UI is of no help.
>>>
>>>
>>> On Wed, Jan 8, 2014, Aureliano Buendia wrote:
>>
>> The strange thing is that spark examples work fine, but when I include a
>> spark example in my jar and deploy it, I get this error for the very same
>> example:
>>
>> WARN ClusterScheduler: Initial job has not accepted any resources; check
>> your cluster UI to ensure that workers are registered and have sufficient
>> memory
On Wed, Jan 8, 2014 at 4:31 PM, Aureliano Buendia wrote:
> Hi,
>
> My spark cluster is not able to run a job due to this warning:
>
> WARN ClusterScheduler: Initial job has not accepted any resources; check
> your cluster UI to ensure that workers are registered and have sufficient
> memory
>
> The workers have this status:
>
> ALIVE   2 (0 Used)   6.3 GB (0.0 B Used)
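For reference, a sketch of the memory settings usually involved when a 0.8-era standalone cluster reports this warning: each worker advertises SPARK_WORKER_MEMORY (the 6.3 GB shown above), while the job requests spark.executor.memory (or the older SPARK_MEM) per node, and it is only scheduled if that request fits what the workers offer. The value and paths below are examples only, reusing the hypothetical job from earlier:

    import org.apache.spark.SparkContext

    object MemorySketch {
      def main(args: Array[String]) {
        // Must be set before the SparkContext is created; it has to fit
        // within what each worker offers (SPARK_WORKER_MEMORY in
        // conf/spark-env.sh, shown as "6.3 GB" in the cluster UI above).
        System.setProperty("spark.executor.memory", "2g")   // example value

        val sc = new SparkContext("spark://master.local:7077", "MemorySketch",
          "/root/spark", Seq("target/scala-2.9.3/my-job.jar"))
        println(sc.parallelize(1 to 100).count())
        sc.stop()
      }
    }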
My data is distributed in some 597 sequence files. My application does a
flatMap on the union of all RDDs created from the individual files. The
flatMap statement throws java.lang.StackOverflowError with the default stack
size. I increased the stack size to 1g (both system and jvm). Now, it has
started printing "Initial job has not accepted any resources; check your
cluster UI to ensure that workers are registered and have sufficient memory"
and is not moving forward, just printing it in a continuous loop. Any ideas
or suggestions would help.

Thx,
Archit
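A sketch of the pattern described here, assuming string keys and values in the sequence files and hypothetical paths. One detail worth noting: SparkContext.union over a sequence of RDDs builds a single flat union, whereas folding rdd1.union(rdd2).union(rdd3)... nests the lineage several hundred levels deep, which is a common way to hit StackOverflowError:

    import org.apache.spark.SparkContext
    import org.apache.spark.rdd.RDD

    object UnionFlatMapSketch {
      def main(args: Array[String]) {
        val sc = new SparkContext(args(0), "UnionFlatMapSketch")

        // Hypothetical layout: one RDD per sequence file (~597 of them).
        val paths = (0 until 597).map(i => "hdfs:///data/part-%05d".format(i))
        val rdds: Seq[RDD[(String, String)]] =
          paths.map(p => sc.sequenceFile[String, String](p))

        // One flat UnionRDD over all the inputs, then the flatMap.
        val all = sc.union(rdds)
        val tokens = all.flatMap { case (_, value) => value.split("\\s+") }

        println("tokens: " + tokens.count())
        sc.stop()
      }
    }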
I figured it out: the problem is that the version of "spark-core" in my
project is different from the version in the pseudo-cluster.
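A minimal build.sbt sketch of that fix: pin the project's spark-core dependency to whatever version the cluster actually runs (0.8.1-incubating below is only an example, taken from the jar names earlier on this page):

    // build.sbt -- keep spark-core in lock-step with the cluster's version.
    scalaVersion := "2.9.3"

    // "provided" assumes the cluster supplies Spark at runtime; drop it otherwise.
    libraryDependencies += "org.apache.spark" %% "spark-core" % "0.8.1-incubating" % "provided"

    // Repository the 0.8-era docs suggest adding for Spark's Akka dependency.
    resolvers += "Akka Repository" at "http://repo.akka.io/releases/"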
On Fri, Dec 20, 2013 at 2:47 PM, Michael Kun Yang wrote:
> Thank you very much.
>
>
> On Friday, December 20, 2013, Christopher Nguyen wrote:
>
MichaelY, this sort of thing where "it could be any of dozens of things"
can usually be resolved by asking someone to share your screen with you for 5
minutes. It's far more productive than guessing over emails.
If @freeman is willing, you can send a private message to him to set that
up over Google
It's alive. I just restarted it, but it doesn't help.
On Friday, December 20, 2013, Michael (Bach) Bui wrote:
> Check if your worker is “alive”
> Also take a look at your master log and see if there is an error message
> about the worker.
>
> This usually can be fixed by restarting Spark.
On Dec 20, 2013, at 3:12 PM, Michael Kun Yang wrote:
Hi,
I really need help. I went through previous posts on the mailing list but
still cannot resolve this problem.
It works when I use the local[n] option, but the error occurs when I use
spark://master.local:7077.
I checked the UI, the workers are correctly registered, and I set
SPARK_MEM compatibly.