Just follow the docs at 
http://spark.incubator.apache.org/docs/latest/quick-start.html#a-standalone-app-in-scala
 for how to run an application. Spark is designed so that you can run your 
application *without* any scripts whatsoever: just pass your JAR to the 
SparkContext constructor, which will distribute it to the workers. You can 
launch your application with “scala”, “java”, or whatever tool you’d prefer.

Matei

On Jan 8, 2014, at 8:26 PM, Aureliano Buendia <buendia...@gmail.com> wrote:

> 
> 
> 
> On Thu, Jan 9, 2014 at 4:11 AM, Matei Zaharia <matei.zaha...@gmail.com> wrote:
> Oh, you shouldn’t use spark-class for your own classes. Just build your job 
> separately and submit it by running it with “java” and creating a 
> SparkContext in it. spark-class is designed to run classes internal to the 
> Spark project.
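> 
> For example, with hypothetical paths (the exact assembly JAR name depends on 
> your build):
> 
>   java -cp my-job.jar:$SPARK_HOME/assembly/target/scala-2.10/spark-assembly-*.jar com.myproject.MyJob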
> 
> Really? Apparently Eugen runs his jobs with:
> 
> SPARK_CLASSPATH=PathToYour.jar $SPARK_HOME/spark-class com.myproject.MyJob
> 
> as he instructed me to do here.
> 
> I have to say that while the Spark documentation is not sparse, it does not 
> cover enough, and as you can see, the community is confused.
> 
> Are Spark users supposed to create something like run-example for their own 
> jobs?
>  
> 
> Matei
> 
> On Jan 8, 2014, at 8:06 PM, Aureliano Buendia <buendia...@gmail.com> wrote:
> 
>> 
>> 
>> 
>> On Thu, Jan 9, 2014 at 3:59 AM, Matei Zaharia <matei.zaha...@gmail.com> 
>> wrote:
>> Have you looked at the cluster UI? Do you see any workers registered 
>> there, and your application under running applications? Maybe you typed in 
>> the wrong master URL or something like that.
>> 
>> No, it's automated; the master URL comes from: cat spark-ec2/cluster-url
>> 
>> I think the problem might be caused by the spark-class script. It seems to 
>> assign too much memory.
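>> 
>> From memory, spark-class does something like this (paraphrasing, so the 
>> exact lines may differ), which makes whatever SPARK_MEM is set to become 
>> both -Xms and -Xmx for the launched class:
>> 
>>   if [ -z "$SPARK_MEM" ] ; then
>>     SPARK_MEM="512m"
>>   fi
>>   JAVA_OPTS="$JAVA_OPTS -Xms$SPARK_MEM -Xmx$SPARK_MEM"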
>> 
>> I had forgotten that run-example doesn't use spark-class.
>>  
>> 
>> Matei
>> 
>> On Jan 8, 2014, at 7:07 PM, Aureliano Buendia <buendia...@gmail.com> wrote:
>> 
>>> The strange thing is that the Spark examples work fine, but when I include 
>>> a Spark example in my own jar and deploy it, I get this error for the very 
>>> same example:
>>> 
>>> WARN ClusterScheduler: Initial job has not accepted any resources; check 
>>> your cluster UI to ensure that workers are registered and have sufficient 
>>> memory
>>> 
>>> My jar is deployed to the master and then to the workers by 
>>> spark-ec2/copy-dir. Why would including the example in my jar cause this 
>>> error?
>>> 
>>> 
>>> 
>>> On Thu, Jan 9, 2014 at 12:41 AM, Aureliano Buendia <buendia...@gmail.com> 
>>> wrote:
>>> Could someone explain how SPARK_MEM, SPARK_WORKER_MEMORY and 
>>> spark.executor.memory should be related so that this unhelpful error 
>>> doesn't occur?
>>> 
>>> Maybe there are more env and Java config variables about memory that I'm 
>>> missing.
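>>> 
>>> My rough understanding so far, which may be wrong: SPARK_WORKER_MEMORY is 
>>> the total a worker offers to executors, while spark.executor.memory (or 
>>> the older SPARK_MEM) is what each application requests per executor, so it 
>>> has to fit within SPARK_WORKER_MEMORY. E.g., in the application (value 
>>> hypothetical):
>>> 
>>>   // before the SparkContext is created
>>>   System.setProperty("spark.executor.memory", "4g") // must be <= SPARK_WORKER_MEMORY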
>>> 
>>> By the way, the part of the error that asks to check the web UI is 
>>> redundant; the UI is of no help.
>>> 
>>> 
>>> On Wed, Jan 8, 2014 at 4:31 PM, Aureliano Buendia <buendia...@gmail.com> 
>>> wrote:
>>> Hi,
>>> 
>>> 
>>> My spark cluster is not able to run a job due to this warning:
>>> 
>>> WARN ClusterScheduler: Initial job has not accepted any resources; check 
>>> your cluster UI to ensure that workers are registered and have sufficient 
>>> memory
>>> 
>>> The workers show this status:
>>> 
>>> State: ALIVE     Cores: 2 (0 Used)     Memory: 6.3 GB (0.0 B Used)
>>> 
>>> So there must be plenty of memory available despite the warning message. 
>>> I'm using the default Spark config; is there a config parameter that needs 
>>> changing for this to work?
>>> 
>>> 
>> 
>> 
> 
> 
