[ https://issues.apache.org/jira/browse/HIVE-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14251557#comment-14251557 ]

Rui Li commented on HIVE-8722:
------------------------------

I got the following exception, which also seems related:
{noformat}
2014-12-18 12:25:18,399 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(435)) - 14/12/18 12:25:18 DEBUG rdd.HadoopRDD: SplitLocationInfo and other new Hadoop classes are unavailable. Using the older Hadoop location info code.
java.lang.ClassNotFoundException: org.apache.hadoop.mapred.InputSplitWithLocationInfo
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:190)
        at org.apache.spark.rdd.HadoopRDD$SplitInfoReflections.<init>(HadoopRDD.scala:381)
        at org.apache.spark.rdd.HadoopRDD$.liftedTree1$1(HadoopRDD.scala:391)
        at org.apache.spark.rdd.HadoopRDD$.<init>(HadoopRDD.scala:390)
        at org.apache.spark.rdd.HadoopRDD$.<clinit>(HadoopRDD.scala)
        at org.apache.spark.rdd.HadoopRDD.getInputFormat(HadoopRDD.scala:179)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:197)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:206)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:204)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:206)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:204)
        at org.apache.spark.ShuffleDependency.<init>(Dependency.scala:79)
        at org.apache.spark.rdd.ShuffledRDD.getDependencies(ShuffledRDD.scala:80)
        at org.apache.spark.rdd.RDD$$anonfun$dependencies$2.apply(RDD.scala:193)
        at org.apache.spark.rdd.RDD$$anonfun$dependencies$2.apply(RDD.scala:191)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.dependencies(RDD.scala:191)
        at org.apache.spark.scheduler.DAGScheduler.visit$1(DAGScheduler.scala:301)
        at org.apache.spark.scheduler.DAGScheduler.getParentStages(DAGScheduler.scala:313)
        at org.apache.spark.scheduler.DAGScheduler.newStage(DAGScheduler.scala:247)
        at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:734)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1389)
        at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1375)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
        at akka.actor.ActorCell.invoke(ActorCell.scala:487)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
        at akka.dispatch.Mailbox.run(Mailbox.scala:220)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
{noformat}
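
If I read the trace right, this ClassNotFoundException is Spark's HadoopRDD$SplitInfoReflections probing for the Hadoop 2.x location-info API via reflection and falling back when it's missing, hence the "Using the older Hadoop location info code" message. A minimal Java sketch of that probe pattern (HadoopLocationProbe is a made-up name for illustration; only the Hadoop class and interface are real):
{code:java}
import org.apache.hadoop.mapred.InputSplit;

public final class HadoopLocationProbe {

  // Probed once at class load; null means the new API is not on the classpath.
  private static final Class<?> SPLIT_WITH_LOCATION_INFO = probe();

  private static Class<?> probe() {
    try {
      return Class.forName("org.apache.hadoop.mapred.InputSplitWithLocationInfo");
    } catch (ClassNotFoundException e) {
      // Expected on older Hadoop: fall back to the plain getLocations() path,
      // as in the "Using the older Hadoop location info code" message above.
      return null;
    }
  }

  public static boolean hasLocationInfo(InputSplit split) {
    // Guard with isInstance() before casting; casting without this check is
    // what produces the ClassCastException in the issue description below.
    return SPLIT_WITH_LOCATION_INFO != null && SPLIT_WITH_LOCATION_INFO.isInstance(split);
  }
}
{code}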

> Enhance InputSplitShims to extend InputSplitWithLocationInfo [Spark Branch]
> ---------------------------------------------------------------------------
>
>                 Key: HIVE-8722
>                 URL: https://issues.apache.org/jira/browse/HIVE-8722
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Jimmy Xiang
>
> We got the following exception in hive.log:
> {noformat}
> 2014-11-03 11:45:49,865 DEBUG rdd.HadoopRDD (Logging.scala:logDebug(84)) - Failed to use InputSplitWithLocations.
> java.lang.ClassCastException: Cannot cast org.apache.hadoop.hive.ql.io.CombineHiveInputFormat$CombineHiveInputSplit to org.apache.hadoop.mapred.InputSplitWithLocationInfo
>         at java.lang.Class.cast(Class.java:3094)
>         at org.apache.spark.rdd.HadoopRDD.getPreferredLocations(HadoopRDD.scala:278)
>         at org.apache.spark.rdd.RDD$$anonfun$preferredLocations$2.apply(RDD.scala:216)
>         at org.apache.spark.rdd.RDD$$anonfun$preferredLocations$2.apply(RDD.scala:216)
>         at scala.Option.getOrElse(Option.scala:120)
>         at org.apache.spark.rdd.RDD.preferredLocations(RDD.scala:215)
>         at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal(DAGScheduler.scala:1303)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$2.apply$mcVI$sp(DAGScheduler.scala:1313)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$2.apply(DAGScheduler.scala:1312)
> {noformat}
> My understanding is that the split location info helps Spark execute tasks 
> more efficiently. This could help other execution engines too. So we should 
> consider enhancing InputSplitShim to implement InputSplitWithLocationInfo if 
> possible.
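>
> A minimal sketch of the idea, assuming the shim split can expose its host list (LocationAwareSplitShim and its FileSplit base are hypothetical stand-ins for the real shim class; the Hadoop types and signatures are real):
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.mapred.FileSplit;
> import org.apache.hadoop.mapred.InputSplitWithLocationInfo;
> import org.apache.hadoop.mapred.SplitLocationInfo;
>
> // Hypothetical stand-in for CombineHiveInputFormat$CombineHiveInputSplit:
> // implementing InputSplitWithLocationInfo lets HadoopRDD.getPreferredLocations()
> // cast the split instead of hitting the ClassCastException above.
> public class LocationAwareSplitShim extends FileSplit implements InputSplitWithLocationInfo {
>
>   public LocationAwareSplitShim(Path file, long start, long length, String[] hosts) {
>     super(file, start, length, hosts);
>   }
>
>   @Override
>   public SplitLocationInfo[] getLocationInfo() throws IOException {
>     // Derive location info from the plain host list; a real shim would
>     // delegate to the wrapped split when it already implements the interface.
>     String[] hosts = getLocations();
>     SplitLocationInfo[] info = new SplitLocationInfo[hosts.length];
>     for (int i = 0; i < hosts.length; i++) {
>       info[i] = new SplitLocationInfo(hosts[i], false); // on disk, not in memory
>     }
>     return info;
>   }
> }
> {code}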



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
