>>> Zip the other two py files. Leave the main py file alone. Don't copy them
>>> to S3, because it seems that only local primary and additional py files
>>> are supported.
>>>
>>> ./bin/spark-submit --master spark://... --py-files
>>>
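A concrete shape for the suggestion above (the module and script names here are hypothetical; the thread doesn't give them):

```shell
# Hypothetical file names -- create two helper modules and a main script.
printf 'def helper():\n    return 1\n' > utils.py
printf 'def predict():\n    return 2\n' > weather_model.py
printf 'print("main")\n' > main.py

# Zip only the helpers; the main script stays a plain local .py file.
python3 -m zipfile -c helpers.zip utils.py weather_model.py

# The primary resource (main.py) must come last, after all options;
# leaving it out is what produces "Error: Must specify a primary
# resource (JAR or Python file)". Shown with echo since no cluster
# is available in this sketch.
echo ./bin/spark-submit \
  --master spark://... \
  --py-files helpers.zip \
  main.py
```

The key point is that `--py-files` only carries the *additional* modules; the main script is always passed separately as the trailing positional argument.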
>>> --
/latest/submitting-applications.html> :
>>
>> ./bin/spark-submit --master
>> spark://ec2-54-51-23-172.eu-west-1.compute.amazonaws.com:5080
>> --py-files
>> s3n://AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY@mubucket
>> //weather_predict.zip
>>
>> But get: “Error: Must specify a primary resource (JAR or Python file)”
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Run-a-self-contained-Spark-app-on-a-Spark-standalone-cluster-tp26753p26761.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

To unsubscribe, e-mail: user
From: kevllino [mailto:kevin.e...@mail.dcu.ie]
Sent: Tuesday, April 12, 2016 5:07 PM
To: user@spark.apache.org
Subject: Run a self-contained Spark app on a Spark standalone cluster
Hi,
I need to know how to run a self-contained Spark app (3 python files) in a
Spark standalone cluster. Can I move
./bin/spark-submit --master
spark://ec2-54-51-23-172.eu-west-1.compute.amazonaws.com:5080
--py-files
s3n://AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY@mubucket
//weather_predict.zip

But get: “Error: Must specify a primary resource (JAR or Python file)”

Best,
Kevin
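For context on why a zip is accepted by --py-files at all: Spark ships the archive to each node and adds it to the Python path, and Python can import modules directly from a zip file. A purely local sketch of that mechanism (the module name is made up for illustration):

```shell
# Build a zip containing one hypothetical helper module.
printf 'def answer():\n    return 42\n' > helper_mod.py
python3 -m zipfile -c deps.zip helper_mod.py

# Remove the source file so the import can only come from the zip.
rm helper_mod.py

# Import straight from the zip -- the same zipimport mechanism Spark
# relies on when it puts --py-files entries on sys.path.
python3 -c 'import sys; sys.path.insert(0, "deps.zip"); import helper_mod; print(helper_mod.answer())'
# prints: 42
```

This also illustrates the error above: the zip can never stand in for the primary resource, because it only supplies importable modules, not a script for spark-submit to run.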
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Run-a-self-contained-Spark-app-on-a-Spark-standalone-cluster-tp26753.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.