Hi Piotr,
The easiest solution for now is to write all your code (including
the case class) inside an object, with the execution part in a method on
that object. Then you can call the method from the Spark shell to execute
your code.
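For example, a minimal sketch of that wrapping (the object, class, and
data here are illustrative, not from your code):

object Analysis {
  // defining the case class inside the object keeps it out of the
  // REPL's per-line generated wrapper classes
  case class MyRow(key: String, data: Int)

  def run(sc: org.apache.spark.SparkContext): Long = {
    sc.parallelize(Seq(MyRow("a", 1), MyRow("b", 2)))
      .filter(_.data > 1)
      .count()
  }
}

// paste the object into the spark shell (e.g. with :paste), then call:
// Analysis.run(sc)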
Cheers,
Rohit
Founder & CEO, Tuplejump, Inc.
Yeah, this is related.
From
https://groups.google.com/forum/#!msg/spark-users/bwAmbUgxWrA/HwP4Nv4adfEJ:
"This is a limitation that will hopefully go away in Scala 2.10 or 2.10 .1,
when we'll use macros to remove the need to do this. (Or more generally if
we get some changes in the Scala interprete
The Spark REPL is slightly modified from the normal Scala REPL to prevent
work from being done twice when closures are deserialized on the workers.
I'm not sure exactly why this causes your problem, but it's probably worth
filing a JIRA about it.
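For reference, this is the kind of spark-shell pattern the thread is about
(a hedged sketch; the exact failure mode depends on the Spark and Scala
versions):

// typed directly at the spark-shell prompt
class MyRow(val key: String, val data: Int)

// the closure captures the REPL's generated line object that encloses
// MyRow, and that wrapper is what gets shipped to and deserialized on
// the workers
val rdd = sc.parallelize(1 to 3).map(i => new MyRow(i.toString, i))
rdd.count()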
Here is another issue with classes defined in the REPL:
Hi,
I'm working on Cassandra-Spark integration and I hit a pretty severe
problem. One of the provided features is mapping Cassandra rows into
objects of user-defined classes, e.g. like this:
class MyRow(val key: String, val data: Int)
sc.cassandraTable("keyspace", "table").select("key", "data")
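For context, a hedged sketch of what the complete call might look like;
the as method and its signature are an assumption about the connector API,
not something confirmed by this (truncated) message:

class MyRow(val key: String, val data: Int)

// read the two columns and map each row through a user-supplied function
val rows = sc.cassandraTable("keyspace", "table")
  .select("key", "data")
  .as((k: String, d: Int) => new MyRow(k, d))

rows.collect().foreach(r => println(s"${r.key} -> ${r.data}"))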