It is caused by a bug in the Spark REPL. I still do not know which part of the
REPL code causes it... I think the people working on the REPL may have a better idea.

Regarding how I found it: based on the exception, it seemed we had pulled in some
irrelevant stuff, and that import was pretty suspicious.
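
Roughly speaking (this is just a sketch of the shape, not the code the REPL
actually generates; the real wrappers are classes named $iwC and are nested
more deeply), the REPL wraps every interpreted line in synthetic nested
objects, and the JVM names of those wrappers are what show up in the stack
trace as $line11.$read$, $line12.$read$$iwC$..., and so on:

  object $line11 {
    object $read {
      object $iw {
        // the case class defined on shell line 11
        case class Record(data_date: String, mobile: String, create_time: String)
      }
    }
  }

  object $line12 {
    object $read {
      object $iw {
        // a later line imports the earlier line's wrapper wholesale; this is
        // the kind of suspicious import that can drag in irrelevant stuff
        import $line11.$read.$iw._
        val mobile = Record("2014-07-20", "1234567", "2014-07-19")
      }
    }
  }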

Thanks,

Yin


On Tue, Jul 22, 2014 at 12:53 AM, Victor Sheng <victorsheng...@gmail.com>
wrote:

> Hi Yin Huai,
>     I tested again with your code snippet.
>     It works well in spark-1.0.1.
>
>     Here is my code:
>
>  // Spark 1.0.x: create a SQLContext from the existing SparkContext
>  val sqlContext = new org.apache.spark.sql.SQLContext(sc)
>  case class Record(data_date: String, mobile: String, create_time: String)
>  val mobile = Record("2014-07-20","1234567","2014-07-19")
>  val lm = List(mobile)
>  val mobileRDD = sc.makeRDD(lm)
>  // turn the RDD of case classes into a SchemaRDD and register it as a table
>  val mobileSchemaRDD = sqlContext.createSchemaRDD(mobileRDD)
>  mobileSchemaRDD.registerAsTable("mobile")
>  sqlContext.sql("select count(1) from mobile").collect()
>
> The result is as follows:
> 14/07/22 15:49:53 INFO spark.SparkContext: Job finished: collect at
> SparkPlan.scala:52, took 0.296864832 s
> res9: Array[org.apache.spark.sql.Row] = Array([1])
>
>
>    But what is the main cause of this exception? And how did you find it out by
> looking at obscure class names like $line11.$read$
> $line12.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$ ?
>
> Thanks,
> Victor
>
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/spark1-0-1-spark-sql-error-java-lang-NoClassDefFoundError-Could-not-initialize-class-line11-read-tp10135p10390.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
