Sandy, perfect! You saved me tons of time. I added this in yarn-site.xml and the job ran to completion.

Can you do me (us) a favor and push the newest, patched spark/hadoop tars to cdh5 if possible?

And thanks again for this (huge time saver).
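For anyone else hitting the same NoSuchFieldException: the workaround is defining yarn.application.classpath in yarn-site.xml. A sketch is below; the entries shown mirror the stock Hadoop 2 defaults, so adjust the paths to your own install layout.

```xml
<!-- yarn-site.xml: sketch of the workaround.
     These classpath entries mirror the stock Hadoop 2 defaults;
     adjust them to match where your Hadoop is installed. -->
<property>
  <name>yarn.application.classpath</name>
  <value>
    $HADOOP_CONF_DIR,
    $HADOOP_COMMON_HOME/share/hadoop/common/*,
    $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
    $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
    $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
    $HADOOP_YARN_HOME/share/hadoop/yarn/*,
    $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
  </value>
</property>
```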


On Wed, Jul 16, 2014 at 1:10 PM, Sandy Ryza <sandy.r...@cloudera.com> wrote:

> Andrew,
>
> Are you running on a CM-managed cluster?  I just checked, and there is a
> bug here (fixed in 1.0), but it's avoided by having
> yarn.application.classpath defined in your yarn-site.xml.
>
> -Sandy
>
>
> On Wed, Jul 16, 2014 at 10:02 AM, Sean Owen <so...@cloudera.com> wrote:
>
>> Somewhere in here, you are not actually running vs Hadoop 2 binaries.
>> Your cluster is certainly Hadoop 2, but your client is not using the
>> Hadoop libs you think it is (or your compiled binary is linking
>> against Hadoop 1, which is the default for Spark -- did you change
>> it?)
>>
>> On Wed, Jul 16, 2014 at 5:45 PM, Andrew Milkowski <amgm2...@gmail.com>
>> wrote:
>> > Hello community,
>> >
>> > tried to run storm app on yarn, using cloudera hadoop and spark distro
>> > (from http://archive.cloudera.com/cdh5/cdh/5)
>> >
>> > hadoop version: hadoop-2.3.0-cdh5.0.3.tar.gz
>> > spark version: spark-0.9.0-cdh5.0.3.tar.gz
>> >
>> > DEFAULT_YARN_APPLICATION_CLASSPATH is part of hadoop-api-yarn jar ...
>> >
>> > thanks for any replies!
>> >
>> > [amilkowski@localhost spark-streaming]$ ./test-yarn.sh
>> > 14/07/16 12:47:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>> > 14/07/16 12:47:17 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
>> > 14/07/16 12:47:17 INFO yarn.Client: Got Cluster metric info from ApplicationsManager (ASM), number of NodeManagers: 1
>> > 14/07/16 12:47:17 INFO yarn.Client: Queue info ... queueName: root.default, queueCurrentCapacity: 0.0, queueMaxCapacity: -1.0, queueApplicationCount = 0, queueChildQueueCount = 0
>> > 14/07/16 12:47:17 INFO yarn.Client: Max mem capability of a single resource in this cluster 8192
>> > 14/07/16 12:47:17 INFO yarn.Client: Preparing Local resources
>> > 14/07/16 12:47:18 INFO yarn.Client: Uploading file:/opt/local/cloudera/spark/cdh5/spark-0.9.0-cdh5.0.3/examples/target/scala-2.10/spark-examples-assembly-0.9.0-cdh5.0.3.jar to hdfs://localhost:8020/user/amilkowski/.sparkStaging/application_1405528355264_0004/spark-examples-assembly-0.9.0-cdh5.0.3.jar
>> > 14/07/16 12:47:19 INFO yarn.Client: Uploading file:/opt/local/cloudera/spark/cdh5/spark-0.9.0-cdh5.0.3/assembly/target/scala-2.10/spark-assembly-0.9.0-cdh5.0.3-hadoop2.3.0-cdh5.0.3.jar to hdfs://localhost:8020/user/amilkowski/.sparkStaging/application_1405528355264_0004/spark-assembly-0.9.0-cdh5.0.3-hadoop2.3.0-cdh5.0.3.jar
>> > 14/07/16 12:47:19 INFO yarn.Client: Setting up the launch environment
>> > Exception in thread "main" java.lang.NoSuchFieldException: DEFAULT_YARN_APPLICATION_CLASSPATH
>> >   at java.lang.Class.getField(Class.java:1579)
>> >   at org.apache.spark.deploy.yarn.ClientBase$.getDefaultYarnApplicationClasspath(ClientBase.scala:403)
>> >   at org.apache.spark.deploy.yarn.ClientBase$$anonfun$5.apply(ClientBase.scala:386)
>> >   at org.apache.spark.deploy.yarn.ClientBase$$anonfun$5.apply(ClientBase.scala:386)
>> >   at scala.Option.getOrElse(Option.scala:120)
>> >   at org.apache.spark.deploy.yarn.ClientBase$.populateHadoopClasspath(ClientBase.scala:385)
>> >   at org.apache.spark.deploy.yarn.ClientBase$.populateClasspath(ClientBase.scala:444)
>> >   at org.apache.spark.deploy.yarn.ClientBase$class.setupLaunchEnv(ClientBase.scala:274)
>> >   at org.apache.spark.deploy.yarn.Client.setupLaunchEnv(Client.scala:41)
>> >   at org.apache.spark.deploy.yarn.Client.runApp(Client.scala:77)
>> >   at org.apache.spark.deploy.yarn.Client.run(Client.scala:98)
>> >   at org.apache.spark.deploy.yarn.Client$.main(Client.scala:183)
>> >   at org.apache.spark.deploy.yarn.Client.main(Client.scala)
>> > [amilkowski@localhost spark-streaming]$
>> >
>>
>
>
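The failure mode Sean describes can be demonstrated without a cluster: the stack trace shows Spark's YARN client calling Class.getField to look up DEFAULT_YARN_APPLICATION_CLASSPATH reflectively, and that lookup throws NoSuchFieldException when the YarnConfiguration on the client classpath is a Hadoop 1 one, which lacks the field. A minimal sketch of that reflection pattern, using hypothetical stand-in classes (OldYarnConfiguration and NewYarnConfiguration are illustrative, not Hadoop's real classes):

```java
// Sketch of the reflective field lookup Spark's ClientBase performs.
// The two nested classes are hypothetical stand-ins: one models a
// Hadoop 1 style YarnConfiguration (field absent), the other a
// Hadoop 2 style one (field present).
public class ClasspathProbe {

    static class OldYarnConfiguration {
        // No DEFAULT_YARN_APPLICATION_CLASSPATH here, as in Hadoop 1.
    }

    static class NewYarnConfiguration {
        public static final String[] DEFAULT_YARN_APPLICATION_CLASSPATH = {
            "$HADOOP_CONF_DIR", "$HADOOP_COMMON_HOME/share/hadoop/common/*"
        };
    }

    // Returns true if the class exposes the field; a false here is the
    // same condition that surfaces as NoSuchFieldException in the trace.
    static boolean hasField(Class<?> conf) {
        try {
            conf.getField("DEFAULT_YARN_APPLICATION_CLASSPATH");
            return true;
        } catch (NoSuchFieldException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(hasField(OldYarnConfiguration.class)); // false: Hadoop 1 path
        System.out.println(hasField(NewYarnConfiguration.class)); // true: Hadoop 2 path
    }
}
```

So if this exception appears even on a Hadoop 2 cluster, the client-side classpath (or the Hadoop version Spark was compiled against) is the thing to check.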
