Just moving it out of the test case is not enough. The case class definition 
must be moved to the top level of the file. Otherwise, a "task not 
serializable" error is thrown at runtime when collect() is executed.
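For illustration, a minimal sketch of the layout described above, assuming Spark 1.x and ScalaTest's FunSuite (the suite and app names here are hypothetical, not from the original post):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext
import org.scalatest.FunSuite

// Top-level definition: the compiler can produce a TypeTag for it, and
// the class does not capture a reference to an enclosing (possibly
// non-serializable) suite instance when shipped to executors.
case class MySchema(key: Int, value: String)

class MySchemaSuite extends FunSuite {
  test("createSchemaRDD from an RDD of a top-level case class") {
    val sc = new SparkContext(
      new SparkConf().setMaster("local").setAppName("MySchemaSuite"))
    val hc = new HiveContext(sc)

    val rdd = sc.parallelize((1 to 10).map(i => MySchema(i, s"val$i")))
    val schemaRDD = hc.createSchemaRDD(rdd)
    schemaRDD.registerTempTable("data")
    hc.sql("select * from data").collect().foreach(println)

    sc.stop()
  }
}
```

Had MySchema been defined inside the test body, the compile-time TypeTag error (or, after hoisting it only to the class level, the runtime serialization error) would reappear.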


From: Du Li <l...@yahoo-inc.com.INVALID>
Date: Thursday, September 11, 2014 at 12:33 PM
To: <user@spark.apache.org>
Subject: Re: SparkSQL HiveContext TypeTag compile error

Solved it.

The problem occurred because the case class was defined within a test case in 
FunSuite. Moving the case class definition out of test fixed the problem.


From: Du Li <l...@yahoo-inc.com.INVALID>
Date: Thursday, September 11, 2014 at 11:25 AM
To: <user@spark.apache.org>
Subject: SparkSQL HiveContext TypeTag compile error

Hi,

I have the following code snippet. It works fine in spark-shell, but in a 
standalone app it reports "No TypeTag available for MySchema" at compile time 
when calling hc.createSchemaRDD(rdd). Does anybody know what might be missing?

Thanks,
Du

------
// sc (SparkContext) and hc (HiveContext) are assumed to already be in
// scope, as they are in spark-shell
import org.apache.spark.sql.hive.HiveContext

case class MySchema(key: Int, value: String)
val rdd = sc.parallelize((1 to 10).map(i => MySchema(i, s"val$i")))
val schemaRDD = hc.createSchemaRDD(rdd)
schemaRDD.registerTempTable("data")
val rows = hc.sql("select * from data")
rows.collect.foreach(println)
