Hi Nick,

The localizedPath must not be null; that is the requirement that fails.
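To illustrate, here is a simplified sketch (not Spark's actual source) of the kind of check that fails in Client.prepareLocalResources: a bare require() with no message produces exactly the "requirement failed" text seen in the trace.

```scala
object LocalizedPathCheck {
  def main(args: Array[String]): Unit = {
    // Hypothetical stand-in for a distributed-cache entry that failed to localize.
    val localizedPath: String = null
    // A bare require throws IllegalArgumentException("requirement failed"),
    // matching the message in the stack trace below.
    require(localizedPath != null)
  }
}
```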

In the SparkConf used by spark-submit (by default, conf/spark-defaults.conf), do you have all the required properties defined, especially spark.yarn.keytab?
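For reference, those properties would be set in conf/spark-defaults.conf roughly like this (the keytab path and principal below are placeholders, and only apply if the cluster uses Kerberos):

```
spark.yarn.keytab      /etc/security/keytabs/spark.keytab
spark.yarn.principal   spark@EXAMPLE.COM
```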

Thanks,
Regards
JB

On 12/11/2015 05:49 PM, Afshartous, Nick wrote:

Hi,


I'm trying to run a streaming job on a single node EMR 4.1/Spark 1.5
cluster.  It's throwing an IllegalArgumentException right away on the submit.

Attaching full output from console.


Thanks for any insights.

--

     Nick



15/12/11 16:44:43 WARN util.NativeCodeLoader: Unable to load
native-hadoop library for your platform... using builtin-java classes
where applicable
15/12/11 16:44:43 INFO client.RMProxy: Connecting to ResourceManager at
ip-10-247-129-50.ec2.internal/10.247.129.50:8032
15/12/11 16:44:43 INFO yarn.Client: Requesting a new application from
cluster with 1 NodeManagers
15/12/11 16:44:43 INFO yarn.Client: Verifying our application has not
requested more than the maximum memory capability of the cluster (54272
MB per container)
15/12/11 16:44:43 INFO yarn.Client: Will allocate AM container, with
11264 MB memory including 1024 MB overhead
15/12/11 16:44:43 INFO yarn.Client: Setting up container launch context
for our AM
15/12/11 16:44:43 INFO yarn.Client: Setting up the launch environment
for our AM container
15/12/11 16:44:43 INFO yarn.Client: Preparing resources for our AM container
15/12/11 16:44:44 INFO yarn.Client: Uploading resource
file:/usr/lib/spark/lib/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar ->
hdfs://ip-10-247-129-50.ec2.internal:8020/user/hadoop/.sparkStaging/application_1447442727308_0126/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar
15/12/11 16:44:44 INFO metrics.MetricsSaver: MetricsConfigRecord
disabledInCluster: false instanceEngineCycleSec: 60
clusterEngineCycleSec: 60 disableClusterEngine: false maxMemoryMb: 3072
maxInstanceCount: 500 lastModified: 1447442734295
15/12/11 16:44:44 INFO metrics.MetricsSaver: Created MetricsSaver
j-2H3BTA60FGUYO:i-f7812947:SparkSubmit:15603 period:60
/mnt/var/em/raw/i-f7812947_20151211_SparkSubmit_15603_raw.bin
15/12/11 16:44:45 INFO metrics.MetricsSaver: 1 aggregated HDFSWriteDelay
1276 raw values into 1 aggregated values, total 1
15/12/11 16:44:45 INFO yarn.Client: Uploading resource
file:/home/hadoop/spark-pipeline-framework-1.1.6-SNAPSHOT/workflow/lib/spark-kafka-services-1.0.jar
-> hdfs://ip-10-247-129-50.ec2.internal:8020/user/hadoop/.sparkStaging/application_1447442727308_0126/spark-kafka-services-1.0.jar
15/12/11 16:44:45 INFO yarn.Client: Uploading resource
file:/home/hadoop/spark-pipeline-framework-1.1.6-SNAPSHOT/conf/AwsCredentials.properties
-> hdfs://ip-10-247-129-50.ec2.internal:8020/user/hadoop/.sparkStaging/application_1447442727308_0126/AwsCredentials.properties
15/12/11 16:44:45 WARN yarn.Client: Resource
file:/home/hadoop/spark-pipeline-framework-1.1.6-SNAPSHOT/conf/AwsCredentials.properties
added multiple times to distributed cache.
15/12/11 16:44:45 INFO yarn.Client: Deleting staging directory
.sparkStaging/application_1447442727308_0126
Exception in thread "main" java.lang.IllegalArgumentException:
requirement failed
     at scala.Predef$.require(Predef.scala:221)
     at
org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$6$$anonfun$apply$2.apply(Client.scala:392)
     at
org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$6$$anonfun$apply$2.apply(Client.scala:390)
     at
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
     at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
     at
org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$6.apply(Client.scala:390)
     at
org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$6.apply(Client.scala:388)
     at scala.collection.immutable.List.foreach(List.scala:318)
     at
org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:388)
     at
org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:629)
     at
org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:119)
     at org.apache.spark.deploy.yarn.Client.run(Client.scala:907)
     at org.apache.spark.deploy.yarn.Client$.main(Client.scala:966)
     at org.apache.spark.deploy.yarn.Client.main(Client.scala)




---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org


--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com

