Do we have a JIRA issue to track this? I think I've run into a similar
issue.
On Wed, Jul 23, 2014 at 1:12 AM, Yin Huai yh...@databricks.com wrote:
It is caused by a bug in the Spark REPL. I still do not know which part of the REPL code causes it... I think people working on the REPL may have a better idea.
Yes, https://issues.apache.org/jira/browse/SPARK-2576 is used to track it.
On Wed, Jul 23, 2014 at 9:11 AM, Nicholas Chammas
nicholas.cham...@gmail.com wrote:
Do we have a JIRA issue to track this? I think I've run into a similar
issue.
Hi Yin Huai,
I tested again with your snippet code.
It works well in spark-1.0.1.
Here is my code:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
case class Record(data_date: String, mobile: String, create_time: String)
val mobile = Record("2014-07-20", "1234567", "2014-07-19")
It is caused by a bug in the Spark REPL. I still do not know which part of the REPL code causes it... I think people working on the REPL may have a better idea.
Regarding how I found it: based on the exception, it seems we pulled in some irrelevant stuff, and that import was pretty suspicious.
Thanks,
Yin
Hi Kevin,
I tried it on spark 1.0.0 and it works fine.
It's a bug in spark 1.0.1 ...
Thanks,
Victor
Hi Victor,
Instead of importing sqlContext.createSchemaRDD, can you explicitly call
sqlContext.createSchemaRDD(rdd) to create a SchemaRDD?
For example, say you have a case class Record:
case class Record(data_date: String, mobile: String, create_time: String)
Then you create an RDD[Record] and call sqlContext.createSchemaRDD on it.
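Put together, a minimal sketch of this workaround might look like the following (assuming a spark-shell 1.0.x session where `sc` is already in scope; the sample data is hypothetical):

```scala
// Explicit-createSchemaRDD workaround, avoiding `import sqlContext.createSchemaRDD`.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)

case class Record(data_date: String, mobile: String, create_time: String)

// Build an RDD[Record] from in-memory sample data...
val rdd = sc.parallelize(Seq(Record("2014-07-20", "1234567", "2014-07-19")))

// ...and call createSchemaRDD explicitly instead of relying on the implicit conversion,
// which is what appears to trigger the REPL bug.
val schemaRdd = sqlContext.createSchemaRDD(rdd)
schemaRdd.registerAsTable("mobile")
sqlContext.sql("select count(1) from mobile").collect()
```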
Hi, Michael
I only modified the default Hadoop version to 0.20.2-cdh3u5 and set
DEFAULT_HIVE=true in SparkBuild.scala.
Then I ran sbt/sbt assembly.
I just run in local standalone mode using sbin/start-all.sh.
The Hadoop version is 0.20.2-cdh3u5.
Then I use spark-shell to execute the Spark SQL query.
Hi, Victor
I got the same issue and posted about it.
In my case, it only happens when I run some Spark SQL queries on spark 1.0.1;
on spark 1.0.0 they work properly.
Have you run the same job on spark 1.0.0?
Sincerely,
Kevin
Hi Svend,
Your reply is very helpful to me. I'll keep an eye on that ticket.
And also... Cheers :)
Best Regards,
Victor
Can you tell us more about your environment? Specifically, are you also
running on Mesos?
On Jul 18, 2014 12:39 AM, Victor Sheng victorsheng...@gmail.com wrote:
when I run a query on a hadoop file:
mobile.registerAsTable("mobile")
val count = sqlContext.sql("select count(1) from mobile")
res5: org.apache.spark.sql.SchemaRDD =
SchemaRDD[21] at RDD at SchemaRDD.scala:100
== Query Plan ==
ExistingRdd [data_date#0,mobile#1,create_time#2], MapPartitionsRDD[4] at