OK, if you're sure your binary has Hadoop 2 and/or your classpath has
Hadoop 2, that's not it. I'd look at Sandy's suggestion then.
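One way to check Sean's point about which Hadoop libs the client is really using (a sketch, not part of the original thread) is to ask the JVM which jar a class was actually loaded from. The `WhichJar` helper below is hypothetical; on the real driver machine you would pass `org.apache.hadoop.yarn.conf.YarnConfiguration`, while the classes used in `main()` are stand-ins so the snippet runs without Hadoop jars on the classpath.

```java
// Hypothetical helper: report where a class was loaded from, or that it is
// missing. If YarnConfiguration resolves to a Hadoop 1 jar here, that is the
// mismatch Sean describes.
import java.security.CodeSource;

public class WhichJar {
    static String locationOf(String className) {
        try {
            Class<?> c = Class.forName(className);
            CodeSource src = c.getProtectionDomain().getCodeSource();
            // JDK core classes report no code source; app/jar classes report a path
            return (src == null) ? "(bootstrap classpath)" : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return "(not on classpath)";
        }
    }

    public static void main(String[] args) {
        // Stand-in lookups; on a driver box, query
        // "org.apache.hadoop.yarn.conf.YarnConfiguration" instead.
        System.out.println(locationOf("java.lang.String"));
        System.out.println(locationOf("org.apache.hadoop.yarn.conf.YarnConfiguration"));
    }
}
```

Running this with the same classpath as the failing launcher script would show whether the Hadoop 2 jars are the ones actually resolved.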

On Wed, Jul 16, 2014 at 6:11 PM, Andrew Milkowski <[email protected]> wrote:
> thanks Sean! So what I did: in project/SparkBuild.scala I made it compile
> with 2.3.0-cdh5.0.3 (and I even did sbt clean before sbt/sbt assembly), so
> this should have built the example client against 2.3.0:
>
>
>
> object SparkBuild extends Build {
>   // Hadoop version to build against. For example, "1.0.4" for Apache releases,
>   // or "2.0.0-mr1-cdh4.2.0" for Cloudera Hadoop. Note that these variables can
>   // be set through the environment variables SPARK_HADOOP_VERSION and SPARK_YARN.
>   //val DEFAULT_HADOOP_VERSION = "1.0.4"
>   val DEFAULT_HADOOP_VERSION = "2.3.0-cdh5.0.3"
>
>   // Whether the Hadoop version to build against is 2.2.x, or a variant of it.
>   // This can be set through the SPARK_IS_NEW_HADOOP environment variable.
>   //val DEFAULT_IS_NEW_HADOOP = false
>   val DEFAULT_IS_NEW_HADOOP = true
>
>   //val DEFAULT_YARN = false
>   val DEFAULT_YARN = true
>
>
>
> On Wed, Jul 16, 2014 at 1:02 PM, Sean Owen <[email protected]> wrote:
>>
>> Somewhere in here, you are not actually running vs Hadoop 2 binaries.
>> Your cluster is certainly Hadoop 2, but your client is not using the
>> Hadoop libs you think it is (or your compiled binary is linking
>> against Hadoop 1, which is the default for Spark -- did you change
>> it?)
>>
>> On Wed, Jul 16, 2014 at 5:45 PM, Andrew Milkowski <[email protected]>
>> wrote:
>> > Hello community,
>> >
>> > tried to run a Spark app on YARN, using the Cloudera Hadoop and Spark
>> > distros (from http://archive.cloudera.com/cdh5/cdh/5)
>> >
>> > hadoop version: hadoop-2.3.0-cdh5.0.3.tar.gz
>> > spark version: spark-0.9.0-cdh5.0.3.tar.gz
>> >
>> > DEFAULT_YARN_APPLICATION_CLASSPATH is part of the hadoop-yarn-api jar ...
>> >
>> > thanks for any replies!
>> >
>> > [amilkowski@localhost spark-streaming]$ ./test-yarn.sh
>> > 14/07/16 12:47:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>> > 14/07/16 12:47:17 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
>> > 14/07/16 12:47:17 INFO yarn.Client: Got Cluster metric info from ApplicationsManager (ASM), number of NodeManagers: 1
>> > 14/07/16 12:47:17 INFO yarn.Client: Queue info ... queueName: root.default, queueCurrentCapacity: 0.0, queueMaxCapacity: -1.0, queueApplicationCount = 0, queueChildQueueCount = 0
>> > 14/07/16 12:47:17 INFO yarn.Client: Max mem capabililty of a single resource in this cluster 8192
>> > 14/07/16 12:47:17 INFO yarn.Client: Preparing Local resources
>> > 14/07/16 12:47:18 INFO yarn.Client: Uploading file:/opt/local/cloudera/spark/cdh5/spark-0.9.0-cdh5.0.3/examples/target/scala-2.10/spark-examples-assembly-0.9.0-cdh5.0.3.jar to hdfs://localhost:8020/user/amilkowski/.sparkStaging/application_1405528355264_0004/spark-examples-assembly-0.9.0-cdh5.0.3.jar
>> > 14/07/16 12:47:19 INFO yarn.Client: Uploading file:/opt/local/cloudera/spark/cdh5/spark-0.9.0-cdh5.0.3/assembly/target/scala-2.10/spark-assembly-0.9.0-cdh5.0.3-hadoop2.3.0-cdh5.0.3.jar to hdfs://localhost:8020/user/amilkowski/.sparkStaging/application_1405528355264_0004/spark-assembly-0.9.0-cdh5.0.3-hadoop2.3.0-cdh5.0.3.jar
>> > 14/07/16 12:47:19 INFO yarn.Client: Setting up the launch environment
>> > Exception in thread "main" java.lang.NoSuchFieldException: DEFAULT_YARN_APPLICATION_CLASSPATH
>> > at java.lang.Class.getField(Class.java:1579)
>> > at org.apache.spark.deploy.yarn.ClientBase$.getDefaultYarnApplicationClasspath(ClientBase.scala:403)
>> > at org.apache.spark.deploy.yarn.ClientBase$$anonfun$5.apply(ClientBase.scala:386)
>> > at org.apache.spark.deploy.yarn.ClientBase$$anonfun$5.apply(ClientBase.scala:386)
>> > at scala.Option.getOrElse(Option.scala:120)
>> > at org.apache.spark.deploy.yarn.ClientBase$.populateHadoopClasspath(ClientBase.scala:385)
>> > at org.apache.spark.deploy.yarn.ClientBase$.populateClasspath(ClientBase.scala:444)
>> > at org.apache.spark.deploy.yarn.ClientBase$class.setupLaunchEnv(ClientBase.scala:274)
>> > at org.apache.spark.deploy.yarn.Client.setupLaunchEnv(Client.scala:41)
>> > at org.apache.spark.deploy.yarn.Client.runApp(Client.scala:77)
>> > at org.apache.spark.deploy.yarn.Client.run(Client.scala:98)
>> > at org.apache.spark.deploy.yarn.Client$.main(Client.scala:183)
>> > at org.apache.spark.deploy.yarn.Client.main(Client.scala)
>> > [amilkowski@localhost spark-streaming]$
>> >
>
>
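The failure in the stack trace above comes from a plain reflective field lookup: ClientBase.getDefaultYarnApplicationClasspath asks the loaded YarnConfiguration class for the public static field DEFAULT_YARN_APPLICATION_CLASSPATH via Class.getField, and an older Hadoop YarnConfiguration that lacks the field makes getField throw NoSuchFieldException. A minimal, Hadoop-free sketch of that mechanism (the FieldLookupDemo class is hypothetical, and java.lang.Integer stands in for YarnConfiguration):

```java
// Hypothetical demo of the reflective lookup that fails in the thread above.
public class FieldLookupDemo {
    // Returns the static field's value, or null if the class lacks the field
    static Object staticField(Class<?> cls, String name) {
        try {
            return cls.getField(name).get(null); // null receiver: static field
        } catch (NoSuchFieldException | IllegalAccessException e) {
            return null; // the failure mode seen in the stack trace above
        }
    }

    public static void main(String[] args) {
        // Field exists on this class, so a value comes back
        System.out.println(staticField(Integer.class, "MAX_VALUE"));
        // Field missing, so the NoSuchFieldException path is taken
        System.out.println(staticField(Integer.class, "DEFAULT_YARN_APPLICATION_CLASSPATH"));
    }
}
```

This is why the error points to a version mismatch rather than a bug in the Spark client: the lookup only fails when the YarnConfiguration actually on the runtime classpath predates the field.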
