On Tue, Dec 24, 2013 at 7:29 AM, Ameet Kini wrote:
>
> If Java serialization is the only one that properly works for closures,
> then I shouldn't be setting "spark.closure.serializer" to
> "org.apache.spark.serializer.KryoSerializer",
>
My understanding is that it's not that Kryo necessarily wouldn't work, but
that closure serialization is only properly supported with the Java
serializer.
In Scala, case classes are serializable by default, so your TileIdWritable
should be a case class. I usually enable Kryo serialization for data and keep
the default serializer for closures; this works pretty well.
Eugen
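Eugen's suggestion can be sketched without Spark at all. `TileId` below is a hypothetical stand-in for `TileIdWritable`, and the round trip uses plain `java.io` serialization, the same mechanism Spark's default closure serializer relies on:

```scala
import java.io._

// Hypothetical stand-in for TileIdWritable: a Scala case class,
// which extends Serializable automatically.
case class TileId(zoom: Int, col: Int, row: Int)

object CaseClassSerDemo {
  // Round-trip an object through plain Java serialization.
  def roundTrip[T](obj: T): T = {
    val buf = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(buf)
    out.writeObject(obj)
    out.close()
    new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray))
      .readObject().asInstanceOf[T]
  }
}
```

Because case classes define structural equality, the deserialized copy compares equal to the original, which is exactly what a closure-captured key needs to survive shipping to an executor.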
2013/12/24 Ameet Kini
>
> If Java serialization is the only one that properly works for closures
If Java serialization is the only one that properly works for closures,
then I shouldn't be setting "spark.closure.serializer" to
"org.apache.spark.serializer.KryoSerializer", and my only hope for getting
lookup (and other such methods that still use closure serializers) to work
is to either a) use
Hi Michael,
I re-ran this on another machine which is on spark's master branch
0.9.0-SNAPSHOT from Dec 14 (right after the scala 2.10 branch was merged
back into master) and recreated the NPE towards the end of this message. I
can't tell, looking at the relevant code, what may have caused the exception.
The problem really is that in certain cases task results -- and
front-end-passed parameters -- are passed through closures. For closures, only
the Java serializer is properly supported (AFAIK).
There have been a limited number of fixes for data-parameter communication
between the front end and the backend.
What Spark version are you using? By looking at Executor.scala line 195, you
will at least know what caused the NPE.
We can start from there.
On Dec 23, 2013, at 10:21 AM, Ameet Kini wrote:
> Thanks Imran.
>
> I tried setting "spark.closure.serializer" to
> "org.apache.spark.serializer.KryoSerializer"
Using Java serialization would make the NPE go away, but it would be a less
preferable solution. My application is network-intensive, and serialization
cost is significant. In other words, these objects are ideal candidates for
Kryo.
On Mon, Dec 23, 2013 at 3:41 PM, Jie Deng wrote:
> maybe try making your class implement Serializable...
maybe try making your class implement Serializable...
2013/12/23 Ameet Kini
> Thanks Imran.
>
> I tried setting "spark.closure.serializer" to
> "org.apache.spark.serializer.KryoSerializer" and now end up seeing
> NullPointerException when the executor starts up. This is a snippet of the
> executor's log.
Thanks Imran.
I tried setting "spark.closure.serializer" to
"org.apache.spark.serializer.KryoSerializer" and now end up seeing
NullPointerException when the executor starts up. This is a snippet of the
executor's log. Notice how "registered TileIdWritable" and "registered
ArgWritable" are logged, suggesting that Kryo registration is happening.
there is a separate setting for serializing closures
"spark.closure.serializer" (listed here
http://spark.incubator.apache.org/docs/latest/configuration.html)
that is used to serialize whatever is used by all the functions on an RDD,
e.g., map, filter, and lookup. Those closures include referenced objects.
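The split Imran describes can be sketched with the `System.setProperty` style of configuration used in that era of Spark (the property names are the ones listed on the configuration page; set them before creating the SparkContext):

```scala
// Configure Kryo for data serialization while deliberately leaving
// spark.closure.serializer at its default (Java), since per this thread
// closures for map/filter/lookup are only properly supported by the
// Java serializer.
object SerializerConfig {
  val KryoClass = "org.apache.spark.serializer.KryoSerializer"

  def configure(): Unit = {
    // Kryo for shuffled/cached data:
    System.setProperty("spark.serializer", KryoClass)
    // Do NOT set spark.closure.serializer -- leaving it unset keeps the
    // default Java serializer for closures and avoids the NPE above.
    System.clearProperty("spark.closure.serializer")
  }
}
```

The design point is the asymmetry: Kryo buys you cheap serialization of the bulky data objects, while the rarely-large closures stay on the well-supported Java path.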
I'm getting the below NotSerializableException despite using Kryo to
serialize that class (TileIdWritable).
The offending line: awtestRdd.lookup(TileIdWritable(200))
Initially I thought Kryo was not being registered properly, so I tried
running operations over awtestRDD which force a shuffle (e.g.
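The failure mode can be reproduced without Spark: `lookup`'s closure captures the key, and closures go through Java serialization, so a key class that does not implement `Serializable` fails regardless of Kryo registration. `RawWritable` below is a hypothetical stand-in for a Hadoop-style Writable that is not `Serializable`:

```scala
import java.io._

// Hypothetical stand-in for a Writable that does NOT implement Serializable.
class RawWritable(val id: Long)

object NotSerializableDemo {
  // Attempt Java serialization of an object; return the failure, if any.
  def trySerialize(obj: AnyRef): Option[NotSerializableException] =
    try {
      val out = new ObjectOutputStream(new ByteArrayOutputStream())
      out.writeObject(obj)
      out.close()
      None
    } catch {
      case e: NotSerializableException => Some(e)
    }
}
```

`trySerialize(new RawWritable(200))` fails with a `NotSerializableException`, mirroring the exception from `awtestRdd.lookup(TileIdWritable(200))`, while any `Serializable` key (e.g. a case class or a String) serializes fine.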