Hi,
Covariance always seems like a good idea at first, but you must be really
careful, as it often has unexpected consequences...
From my experience, covariance often becomes a pain when dealing with
serialization/deserialization (I've experienced a few cases while
developing play-json and datomisca).
Moreover, if you have implicits, variance often becomes a headache...
This is exactly the kind of feedback I was hoping for.
I believe Kryo serialization uses the runtime class, not the declared class;
we have no issues serializing covariant Scala lists.
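Koert's point can be seen in plain Scala (a minimal sketch; Kryo itself is not shown, only the runtime-class behavior a runtime-class-based serializer relies on):

```scala
// Minimal sketch (not Kryo itself): even when values are referenced through
// a more general declared type via covariance, the JVM runtime class of each
// element is unchanged -- which is what runtime-class-based serializers use.
object RuntimeClassDemo {
  def main(args: Array[String]): Unit = {
    val xs: List[Any] = List(1, "two") // widened to List[Any] via covariance
    // The declared element type says Any, but the runtime classes survive:
    println(xs.map(_.getClass.getSimpleName).mkString(", ")) // prints "Integer, String"
  }
}
```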
On Sat, Mar 22, 2014 at 11:59 AM, Pascal Voitot Dev
pascal.voitot@gmail.com wrote:
On Sat, Mar 22, 2014 at 3:45 PM, Michael Armbrust mich...@databricks.com
wrote:
Dear,
I'm pretty much following Pascal's advice, since I've myself
encountered some problems with implicits (when playing the same kind of game
with my Neo4J Scala API).
Nevertheless, one remark regarding serialization: the loss of data
shouldn't occur in the case when implicit
On Sat, Mar 22, 2014 at 8:59 AM, Pascal Voitot Dev
pascal.voitot@gmail.com wrote:
The problem I was talking about is when you try to use typeclass converters
and make them contravariant/covariant for input/output. Something like:

Reader[-I, +O] { def read(i: I): O }

Doing this, you soon
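The pitfall Pascal is pointing at can be sketched as follows (an illustrative sketch; the Reader trait and instance names are hypothetical, not from play-json or datomisca). Because Reader is contravariant in I, an instance written for Any is also a valid instance for every specific input type, which is what makes implicit search behave surprisingly:

```scala
// Illustrative sketch of the variance pitfall -- names are hypothetical.
trait Reader[-I, +O] { def read(i: I): O }

object VarianceDemo {
  // A deliberately over-general instance:
  val anyReader: Reader[Any, String] = new Reader[Any, String] {
    def read(i: Any): String = "generic: " + i
  }

  def main(args: Array[String]): Unit = {
    // Contravariance in I means Reader[Any, String] <: Reader[Int, String],
    // so the general instance silently satisfies the specific requirement.
    // When such instances are implicit, resolution can end up selecting the
    // general one where you expected a dedicated Reader[Int, String].
    val intReader: Reader[Int, String] = anyReader // compiles via contravariance
    println(intReader.read(42)) // prints "generic: 42"
  }
}
```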
On Sat, Mar 22, 2014 at 8:38 PM, David Hall d...@cs.berkeley.edu wrote:
Hi Pascal,
Thanks for the input. I think we are going to be okay here since, as Koert
said, the current serializers use runtime type information. We could also
keep a ClassTag around for the original type from when the RDD was created.
Good things to be aware of though.
Michael
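Michael's idea of keeping a ClassTag around for the original type can be sketched like this (an illustrative sketch only, not Spark's actual RDD code; the class name is hypothetical):

```scala
import scala.reflect.ClassTag

// Illustrative sketch (not Spark's actual RDD): capture a ClassTag for the
// element type at construction time, so the originally declared type is
// still recoverable later despite erasure and covariant widening.
class TaggedCollection[T: ClassTag](val data: Seq[T]) {
  val elemClass: Class[_] = implicitly[ClassTag[T]].runtimeClass
}

object ClassTagDemo {
  def main(args: Array[String]): Unit = {
    val c = new TaggedCollection(Seq(1, 2, 3))
    println(c.elemClass) // prints "int" -- the element class captured at creation
  }
}
```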
On Sat, Mar 22,
Any plans to integrate SPARK-818 into Spark trunk? The pull request is
open.
It offers Spark as a service, with the Spark jobserver running as a separate
process.
Thanks,
Suhas.
Thanks guys.
On Thu, Mar 20, 2014 at 10:39 PM, Patrick Wendell pwend...@gmail.com wrote:
It has a bunch of packages installed on it for various spark
dependencies (libfortran, numpy, scipy) and some helpful tools (dstat,
iotop).
On Thu, Mar 20, 2014 at 10:21 AM, Reynold Xin