On Thu, May 7, 2015 at 10:18 AM, Iulian Dragoș iulian.dra...@typesafe.com wrote:
Got it!
I'll open a Jira ticket and PR when I have a working solution.
Scratch that, I found SPARK-5281
https://issues.apache.org/jira/browse/SPARK-5281.
On Wed, May 6, 2015 at 11:53 PM, Michael Armbrust mich...@databricks.com wrote:
Hi, I just saw this question. I posted my solution to this Stack Overflow
question:
https://stackoverflow.com/questions/29796928/whats-the-most-efficient-way-to-filter-a-dataframe
Scala reflection can take a classloader when creating a mirror (
universe.runtimeMirror(loader)). I can have a look,
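(For reference, a minimal sketch of what passing an explicit classloader to Scala reflection looks like; the helper name and the class looked up are just illustrative, not Spark code:)

```scala
import scala.reflect.runtime.{universe => ru}

object MirrorExample {
  // Build a runtime mirror backed by an explicit classloader instead of
  // whatever scala.reflect would pick up by default, then resolve a class
  // through it by its fully qualified name.
  def fullNameVia(loader: ClassLoader, className: String): String = {
    val mirror: ru.Mirror = ru.runtimeMirror(loader)
    mirror.staticClass(className).fullName
  }
}
```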
Hi Iulian,
The relevant code is in ScalaReflection
https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/ScalaReflection.scala,
and it would be awesome if you could suggest how to fix this more
generally. Specifically, this code is also broken when
It failed to find the class org.apache.spark.sql.catalyst.ScalaReflection
in the Spark SQL library. Make sure it's on the classpath and that the
version is correct, too.
Dean Wampler, Ph.D.
Author: Programming Scala, 2nd Edition
http://shop.oreilly.com/product/0636920033073.do (O'Reilly)
Hi Everyone,
I am getting the following error while registering a table using the Scala IDE.
Please let me know how to resolve this error. I am using Spark 1.2.1.
import sqlContext.createSchemaRDD
val empFile = sc.textFile("/tmp/emp.csv", 4)
.map( _.split(",") )
This is actually a problem with our use of Scala's reflection library.
Unfortunately you need to load Spark SQL using the primordial classloader;
otherwise you run into this problem. If anyone from the Scala side can
hint at how we can tell scala.reflect which classloader to use when creating
the
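(As an aside, one common workaround for "library consults the wrong classloader" problems is to swap the thread's context classloader around the offending call; this is a generic sketch of that pattern, not something Spark itself does here, and which loader to install depends on how Spark SQL was loaded:)

```scala
object WithClassLoader {
  // Run `body` with `loader` installed as the thread's context classloader,
  // restoring the previous loader afterwards. Code that consults the context
  // classloader (as some reflection paths do) will then see `loader`.
  def withContextClassLoader[T](loader: ClassLoader)(body: => T): T = {
    val previous = Thread.currentThread().getContextClassLoader
    Thread.currentThread().setContextClassLoader(loader)
    try body
    finally Thread.currentThread().setContextClassLoader(previous)
  }
}
```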