[ https://issues.apache.org/jira/browse/SPARK-1719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14074021#comment-14074021 ]

Xuri Nagarin commented on SPARK-1719:
-------------------------------------

Is this related? When I run spark-shell with "--master yarn 
--driver-library-path /opt/cloudera/parcels/GPLEXTRAS/lib/hadoop/lib/native/", 
I can load an LZO file:

val textFile = sc.textFile("/some/lzo/file.lzo")
textFile.first()
<returns a string from the file>

textFile.count()
org.apache.spark.SparkException: Job aborted due to stage failure: Task 3.0:0 failed 4 times, most recent failure: Exception failure in TID 7 on host node1-9-ops.abc.net: java.lang.RuntimeException: native-lzo library not available
        com.hadoop.compression.lzo.LzopCodec.getDecompressorType(LzopCodec.java:96)
        org.apache.hadoop.io.compress.CodecPool.getDecompressor(CodecPool.java:176)
        org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:110)
        org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
        org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:193)
        org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:184)
        org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:93)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
        org.apache.spark.scheduler.Task.run(Task.scala:51)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1033)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1017)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1015)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1015)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:633)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1207)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
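
For reference, a minimal sketch of what I would expect to work once the executor side is honoured. spark.driver.extraLibraryPath and spark.executor.extraLibraryPath are the real config keys; the path is the same parcel directory as above, and whether YARN actually applies the executor-side setting is exactly what this issue is about. Written as a standalone app since spark-shell already creates sc:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: set the native library path for both the driver and the executors.
// The parcel path is the one from the spark-shell command above.
val conf = new SparkConf()
  .setAppName("lzo-read-test")
  .set("spark.driver.extraLibraryPath",
       "/opt/cloudera/parcels/GPLEXTRAS/lib/hadoop/lib/native/")
  .set("spark.executor.extraLibraryPath",
       "/opt/cloudera/parcels/GPLEXTRAS/lib/hadoop/lib/native/")
val sc = new SparkContext(conf)

// first() can succeed driver-side, but count() forces tasks on the executors,
// which is where the native-lzo lookup fails today.
sc.textFile("/some/lzo/file.lzo").count()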


> spark.executor.extraLibraryPath isn't applied on yarn
> -----------------------------------------------------
>
>                 Key: SPARK-1719
>                 URL: https://issues.apache.org/jira/browse/SPARK-1719
>             Project: Spark
>          Issue Type: Sub-task
>          Components: YARN
>    Affects Versions: 1.0.0
>            Reporter: Thomas Graves
>            Assignee: Guoqiang Li
>             Fix For: 1.1.0
>
>
> Looking through the code for Spark on YARN, I don't see that 
> spark.executor.extraLibraryPath is being properly applied when it launches 
> executors.  It is using spark.driver.libraryPath in ClientBase.
> Note that I didn't actually test it, so it's possible I missed something.
> I also think it's better to use LD_LIBRARY_PATH rather than -Djava.library.path: 
> once java.library.path is set, the JVM doesn't search LD_LIBRARY_PATH.  In Hadoop 
> we switched to LD_LIBRARY_PATH instead of java.library.path; see 
> https://issues.apache.org/jira/browse/MAPREDUCE-4072.  I'll split this into a 
> separate JIRA.
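
A quick, purely illustrative sketch of the distinction in the quoted description: java.library.path is what System.loadLibrary consults and is fixed when the JVM starts, while LD_LIBRARY_PATH comes from the launch environment of the container. The library name below is assumed to be hadoop-lzo's native library, inferred from the "native-lzo library not available" error above:

// Illustrative only -- run in any JVM/spark-shell.
// If the lzo native directory is missing from java.library.path, loadLibrary throws
// UnsatisfiedLinkError even when LD_LIBRARY_PATH points at it, which matches the
// "native-lzo library not available" failure seen on the executors.
println(System.getProperty("java.library.path"))          // set by -Djava.library.path
println(sys.env.getOrElse("LD_LIBRARY_PATH", "<unset>"))  // set by the launch environment
System.loadLibrary("gplcompression")                      // hadoop-lzo's native library (assumed name)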


