[ https://issues.apache.org/jira/browse/SPARK-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14803081#comment-14803081 ]

Maximilian Michels commented on SPARK-2576:
-------------------------------------------

User 'jkovacs' has created a pull request for this issue:
https://github.com/apache/flink/pull/1138

> slave node throws NoClassDefFoundError $line11.$read$ when executing a Spark SQL query on HDFS CSV file
> ------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-2576
>                 URL: https://issues.apache.org/jira/browse/SPARK-2576
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, SQL
>    Affects Versions: 1.0.1
>         Environment: One Mesos 0.19 master without zookeeper and 4 mesos slaves.
> JDK 1.7.51 and Scala 2.10.4 on all nodes.
> HDFS from CDH5.0.3
> Spark version: I tried both with the pre-built CDH5 spark package available from http://spark.apache.org/downloads.html and by packaging spark with sbt 0.13.2, JDK 1.7.51 and scala 2.10.4 as explained here http://mesosphere.io/learn/run-spark-on-mesos/
> All nodes are running Debian 3.2.51-1 x86_64 GNU/Linux and have 
>            Reporter: Svend Vanderveken
>            Assignee: Prashant Sharma
>            Priority: Blocker
>             Fix For: 1.0.2, 1.1.0
>
>
> Execution of a SQL query against HDFS systematically throws a class-not-found exception on the slave nodes when the query is executed.
> (This was originally reported on the user list: http://apache-spark-user-list.1001560.n3.nabble.com/spark1-0-1-spark-sql-error-java-lang-NoClassDefFoundError-Could-not-initialize-class-line11-read-tc10135.html)
> Sample code (run from spark-shell):
> {code}
> val sqlContext = new org.apache.spark.sql.SQLContext(sc)
> import sqlContext.createSchemaRDD
> case class Car(timestamp: Long, objectid: String, isGreen: Boolean)
> // I get the same error when pointing to the folder "hdfs://vm28:8020/test/cardata"
> val data = sc.textFile("hdfs://vm28:8020/test/cardata/part-00000")
> val cars = data.map(_.split(",")).map(ar => Car(ar(0).toLong, ar(1), ar(2).toBoolean))
> cars.registerAsTable("mcars")
> val allgreens = sqlContext.sql("SELECT objectid from mcars where isGreen = true")
> allgreens.collect.take(10).foreach(println)
> {code}
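> For comparison, a minimal sketch of the same query packaged as a standalone application and submitted with spark-submit instead of typed into spark-shell. Compiled classes travel in the application jar, so the REPL-generated $line11.$read$ wrapper classes are never involved; the object name and app name below are made up for illustration:
> {code}
> import org.apache.spark.{SparkConf, SparkContext}
> import org.apache.spark.sql.SQLContext
>
> // Same schema as in the shell repro above.
> case class Car(timestamp: Long, objectid: String, isGreen: Boolean)
>
> object CarQuery {
>   def main(args: Array[String]): Unit = {
>     val sc = new SparkContext(new SparkConf().setAppName("CarQuery"))
>     val sqlContext = new SQLContext(sc)
>     import sqlContext.createSchemaRDD
>
>     val data = sc.textFile("hdfs://vm28:8020/test/cardata/part-00000")
>     val cars = data.map(_.split(",")).map(ar => Car(ar(0).toLong, ar(1), ar(2).toBoolean))
>     cars.registerAsTable("mcars")
>     val allgreens = sqlContext.sql("SELECT objectid from mcars where isGreen = true")
>     allgreens.collect.take(10).foreach(println)
>   }
> }
> {code}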
> Stack trace on the slave nodes: 
> {code}
> I0716 13:01:16.215158 13631 exec.cpp:131] Version: 0.19.0
> I0716 13:01:16.219285 13656 exec.cpp:205] Executor registered on slave 20140714-142853-485682442-5050-25487-2
> 14/07/16 13:01:16 INFO MesosExecutorBackend: Registered with Mesos as executor ID 20140714-142853-485682442-5050-25487-2
> 14/07/16 13:01:16 INFO SecurityManager: Changing view acls to: mesos,mnubohadoop
> 14/07/16 13:01:16 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(mesos, mnubohadoop)
> 14/07/16 13:01:17 INFO Slf4jLogger: Slf4jLogger started
> 14/07/16 13:01:17 INFO Remoting: Starting remoting
> 14/07/16 13:01:17 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@vm23:38230]
> 14/07/16 13:01:17 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@vm23:38230]
> 14/07/16 13:01:17 INFO SparkEnv: Connecting to MapOutputTracker: akka.tcp://spark@vm28:41632/user/MapOutputTracker
> 14/07/16 13:01:17 INFO SparkEnv: Connecting to BlockManagerMaster: akka.tcp://spark@vm28:41632/user/BlockManagerMaster
> 14/07/16 13:01:17 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20140716130117-8ea0
> 14/07/16 13:01:17 INFO MemoryStore: MemoryStore started with capacity 294.9 MB.
> 14/07/16 13:01:17 INFO ConnectionManager: Bound socket to port 44501 with id = ConnectionManagerId(vm23-hulk-priv.mtl.mnubo.com,44501)
> 14/07/16 13:01:17 INFO BlockManagerMaster: Trying to register BlockManager
> 14/07/16 13:01:17 INFO BlockManagerMaster: Registered BlockManager
> 14/07/16 13:01:17 INFO HttpFileServer: HTTP File server directory is /tmp/spark-ccf6f36c-2541-4a25-8fe4-bb4ba00ee633
> 14/07/16 13:01:17 INFO HttpServer: Starting HTTP Server
> 14/07/16 13:01:18 INFO Executor: Using REPL class URI: http://vm28:33973
> 14/07/16 13:01:18 INFO Executor: Running task ID 2
> 14/07/16 13:01:18 INFO HttpBroadcast: Started reading broadcast variable 0
> 14/07/16 13:01:18 INFO MemoryStore: ensureFreeSpace(125590) called with curMem=0, maxMem=309225062
> 14/07/16 13:01:18 INFO MemoryStore: Block broadcast_0 stored as values to memory (estimated size 122.6 KB, free 294.8 MB)
> 14/07/16 13:01:18 INFO HttpBroadcast: Reading broadcast variable 0 took 0.294602722 s
> 14/07/16 13:01:19 INFO HadoopRDD: Input split: hdfs://vm28:8020/test/cardata/part-00000:23960450+23960451
> I0716 13:01:19.905113 13657 exec.cpp:378] Executor asked to shutdown
> 14/07/16 13:01:20 ERROR Executor: Exception in task ID 2
> java.lang.NoClassDefFoundError: $line11/$read$
>     at $line12.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:19)
>     at $line12.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:19)
>     at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>     at scala.collection.Iterator$$anon$1.next(Iterator.scala:853)
>     at scala.collection.Iterator$$anon$1.head(Iterator.scala:840)
>     at org.apache.spark.sql.execution.ExistingRdd$$anonfun$productToRowRdd$1.apply(basicOperators.scala:181)
>     at org.apache.spark.sql.execution.ExistingRdd$$anonfun$productToRowRdd$1.apply(basicOperators.scala:176)
>     at org.apache.spark.rdd.RDD$$anonfun$12.apply(RDD.scala:559)
>     at org.apache.spark.rdd.RDD$$anonfun$12.apply(RDD.scala:559)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>     at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
>     at org.apache.spark.scheduler.Task.run(Task.scala:51)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>     at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.ClassNotFoundException: $line11.$read$
>     at org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:65)
>     at java.lang.ClassLoader.loadClass(Unknown Source)
>     at java.lang.ClassLoader.loadClass(Unknown Source)
>     ... 27 more
> Caused by: java.lang.ClassNotFoundException: $line11.$read$
>     at java.lang.ClassLoader.findClass(Unknown Source)
>     at org.apache.spark.util.ParentClassLoader.findClass(ParentClassLoader.scala:26)
>     at java.lang.ClassLoader.loadClass(Unknown Source)
>     at java.lang.ClassLoader.loadClass(Unknown Source)
>     at org.apache.spark.util.ParentClassLoader.loadClass(ParentClassLoader.scala:30)
>     at org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:60)
>     ... 29 more
> {code}
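> The "Using REPL class URI: http://vm28:33973" log line and the ExecutorClassLoader frames above refer to the mechanism by which executors obtain classes that the shell compiles on the fly (the $line11.$read$ wrappers): their bytecode is fetched over HTTP from a class server running in the driver's REPL. A rough conceptual sketch of that pattern, not Spark's actual implementation, is a URL-backed loader whose search path includes the REPL class server, so anything the parent loader cannot resolve is looked up there:
> {code}
> import java.net.{URL, URLClassLoader}
>
> // Conceptual sketch only: classes the parent loader cannot find
> // (e.g. $line11/$read$.class) are fetched from the REPL class server URI.
> class ReplClassServerLoader(replClassUri: String, parent: ClassLoader)
>   extends URLClassLoader(Array(new URL(replClassUri + "/")), parent)
> {code}
> When that remote lookup fails on the slave, the task ends in the ClassNotFoundException shown above.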
> Note that running a simple map+reduce job on the same hdfs files with the 
> same installation works fine: 
> {code}
> # this works
> val data = sc.textFile("hdfs://vm28:8020/test/cardata/")
> val lineLengths = data.map(s => s.length)
> val totalLength = lineLengths.reduce((a, b) => a + b)
> {code}
> The hdfs files contain just plain csv files: 
> {code}
> $ hdfs dfs -tail /test/cardata/part-00000
> 14/07/16 13:18:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 1396396560000,2ea211cc-ea01-435a-a190-98a6dd5ccd0a,false,Ivory,chrysler,New Caledonia,1970,0.0,0.0,0.0,0.0,38.24645296229051,99.41880649743675,26.619177092584696
> 1396396620000,2ea211cc-ea01-435a-a190-98a6dd5ccd0a,false,Ivory,chrysler,New Caledonia,1970,1.3637951832478066,0.5913309707002152,56.6895043678199,96.54451566032114,100.76632815433682,92.29189473832957,7.009760456230157
> 1396396680000,2ea211cc-ea01-435a-a190-98a6dd5ccd0a,false,Ivory,chrysler,New Caledonia,1970,-3.405565593143888,0.8104753585926928,41.677424397834905,36.57019235002255,8.974008103729105,92.94054149986701,11.673872282136195
> 1396396740000,2ea211cc-ea01-435a-a190-98a6dd5ccd0a,false,Ivory,chrysler,New Caledonia,1970,2.6548062807597854,0.6180832371072019,40.88058181777176,24.47455760837969,37.42027121601756,93.97373842452362,16.48937328407166
> {code}
> spark-env.sh looks like this:
> {code}
> export SPARK_LOCAL_IP=vm28
> export MESOS_NATIVE_LIBRARY=/usr/local/etc/mesos-0.19.0/build/src/.libs/libmesos.so
> export SPARK_EXECUTOR_URI=hdfs://vm28:8020/apps/spark/spark-1.0.1-2.3.0-mr1-cdh5.0.2-hive.tgz
> {code}


