Yup, unfortunately this is true due to recent API changes in YARN. We’ll 
probably ship two versions of the YARN package in the next Spark release — 
until then, you’d have to fix this by hand and rebuild.
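The fix Guillaume describes below boils down to dropping one call in the chain: pre-2.1.0-beta YARN returned containers via a nested AMResponse, while 2.1.0-beta and later expose getAllocatedContainers() directly on AllocateResponse. Here is a minimal, self-contained sketch of that shape change using hypothetical stub classes (the real types live in org.apache.hadoop.yarn.api.protocolrecords and carry many more fields):

```java
import java.util.Arrays;
import java.util.List;

class Container {
    final String id;
    Container(String id) { this.id = id; }
}

// Pre-2.1.0-beta shape (e.g. CDH-4.3.0): containers hang off a nested AMResponse.
class OldAllocateResponse {
    private final AMResponse amResponse;
    OldAllocateResponse(AMResponse r) { this.amResponse = r; }
    AMResponse getAMResponse() { return amResponse; }
}

class AMResponse {
    private final List<Container> containers;
    AMResponse(List<Container> c) { this.containers = c; }
    List<Container> getAllocatedContainers() { return containers; }
}

// 2.1.0-beta and later (e.g. CDH-4.4.0): AllocateResponse exposes the
// allocated containers directly, so the getAMResponse() hop disappears.
class NewAllocateResponse {
    private final List<Container> containers;
    NewAllocateResponse(List<Container> c) { this.containers = c; }
    List<Container> getAllocatedContainers() { return containers; }
}

public class YarnApiChange {
    public static void main(String[] args) {
        List<Container> allocated =
            Arrays.asList(new Container("c1"), new Container("c2"));

        // Old call chain, as in YarnAllocationHandler before the fix:
        OldAllocateResponse oldResp = new OldAllocateResponse(new AMResponse(allocated));
        List<Container> viaOld = oldResp.getAMResponse().getAllocatedContainers();

        // New call chain after dropping .getAMResponse():
        NewAllocateResponse newResp = new NewAllocateResponse(allocated);
        List<Container> viaNew = newResp.getAllocatedContainers();

        // Both paths yield the same containers; only the access path changed.
        System.out.println(viaOld.size() == viaNew.size());
    }
}
```

This is why supporting both CDH versions from one source tree requires either two builds or a shim over the two response shapes.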

Matei

On Oct 29, 2013, at 9:31 AM, Guillaume Pitel <[email protected]> wrote:

> Hi,
> 
> I'm trying to compile Spark (both 0.8 and master) against CDH-4.4.0 with YARN.
> 
> Unfortunately it fails because of an API change introduced between CDH-4.3.0 
> and CDH-4.4.0.
> 
> The API has changed since Hadoop 2.1.0-beta. 
> 
> The AllocateResponse now directly exposes a getAllocatedContainers() method: 
> http://hadoop.apache.org/docs/r2.2.0/api/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.html
> 
> The same applies to a few other methods used later in the code.
> 
> So one should just change the assignment of amResp (for instance, at line 86 in 
> https://github.com/apache/incubator-spark/blob/master/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala#L86
>  )
> 
>     // Keep polling the Resource Manager for containers
>     val amResp = allocateWorkerResources(workersToRequest).getAMResponse
> 
> To
> 
>     val amResp = allocateWorkerResources(workersToRequest)
> 
> Tried with master, and it works (successfully launched a SparkPi job on 4 nodes). 
> 
> Guillaume
> -- 
> Guillaume PITEL, Président 
> +33(0)6 25 48 86 80
> 
> eXenSa S.A.S. 
> 41, rue Périer - 92120 Montrouge - FRANCE 
> Tel +33(0)1 84 16 36 77 / Fax +33(0)9 72 28 37 05
