Hi Ron,

I just checked and this bug is fixed in recent releases of Spark.

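For anyone still on an affected release, the underlying pattern in ClientBase.getDefaultYarnApplicationClasspath() is a reflective static-field lookup, and the fix amounts to tolerating a missing field instead of crashing. A minimal, Hadoop-free sketch of that pattern (ReflectSketch and staticField are illustrative names, and java.lang.Integer stands in for MRJobConfig/YarnConfiguration so this runs without Hadoop on the classpath):

```scala
// Sketch of the reflective static-field lookup, with the missing-field
// failure mode handled instead of propagating an exception.
object ReflectSketch {
  // Look up a public static field by name; None if the class lacks it.
  def staticField(cls: Class[_], name: String): Option[AnyRef] =
    try Some(cls.getField(name).get(null))
    catch { case _: NoSuchFieldException => None }

  def main(args: Array[String]): Unit = {
    // A field that exists resolves normally...
    println(staticField(classOf[java.lang.Integer], "MAX_VALUE"))
    // ...while a missing field (Ron's case on older Hadoop) yields None
    // rather than an unhandled NoSuchFieldException.
    println(staticField(classOf[java.lang.Integer], "NO_SUCH_FIELD"))
  }
}
```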
-Sandy


On Sun, Jul 13, 2014 at 8:15 PM, Chester Chen <ches...@alpinenow.com> wrote:

> Ron,
>     Which distribution and version of Hadoop are you using?
>
>      I just looked at CDH5 (hadoop-mapreduce-client-core-2.3.0-cdh5.0.0),
>
> MRJobConfig does have the field:
>
> java.lang.String DEFAULT_MAPREDUCE_APPLICATION_CLASSPATH;
>
> Chester
>
>
>
> On Sun, Jul 13, 2014 at 6:49 PM, Ron Gonzalez <zlgonza...@yahoo.com>
> wrote:
>
>> Hi,
>>   I was doing programmatic submission of Spark YARN jobs and saw this
>> code in ClientBase.getDefaultYarnApplicationClasspath():
>>
>> val field =
>>   classOf[MRJobConfig].getField("DEFAULT_YARN_APPLICATION_CLASSPATH")
>>
>> MRJobConfig doesn't have this field, so the created launch env is
>> incomplete. A workaround is to set yarn.application.classpath to the value
>> of YarnConfiguration.DEFAULT_YARN_APPLICATION_CLASSPATH.
>>
>> This causes the Spark job to hang if the submission config differs from
>> the default config. For example, if my resource manager port is 8050
>> instead of 8030, the Spark app is not able to register itself and stays
>> in the ACCEPTED state.
>>
>> I can easily fix this by reading the field from YarnConfiguration instead
>> of MRJobConfig, but was wondering what the steps are for submitting a fix.
>>
>> Thanks,
>> Ron
>>
>> Sent from my iPhone
>
>
>
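For anyone applying the workaround Ron describes, a sketch of the yarn-site.xml setting. The values shown are the stock Hadoop 2.x defaults from YarnConfiguration.DEFAULT_YARN_APPLICATION_CLASSPATH; verify them against your own distribution before using:

```xml
<!-- yarn-site.xml: set the classpath explicitly so Spark's reflective
     lookup of the default is never needed. -->
<property>
  <name>yarn.application.classpath</name>
  <value>$HADOOP_CONF_DIR,
         $HADOOP_COMMON_HOME/share/hadoop/common/*,
         $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
         $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
         $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
         $HADOOP_YARN_HOME/share/hadoop/yarn/*,
         $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
</property>
```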
