The situation has changed a bit: the workaround is to add a restriction on `K`
(`K` must be a subtype of `Product`).
Though now I get another error:
org.apache.spark.sql.AnalysisException: cannot resolve '(`key` = `key`)' due
to data type mismatch: differing types in '(`key` = `key`)'
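For reference, a minimal sketch of the kind of join involved, using a hypothetical `SpatialKey` case class as a stand-in for the real key type. Case classes are `Product`s, so Spark derives struct Encoders for them and the key columns on both sides get the same type; when one side's key is instead Kryo-encoded (a single binary column) and the other is a struct, the equality `key = key` fails with exactly this type-mismatch AnalysisException:

```scala
import org.apache.spark.sql.{Dataset, SparkSession}

// Hypothetical Product key: case classes get struct Encoders for free
case class SpatialKey(col: Int, row: Int)

object JoinByKeyExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("join-by-key")
      .getOrCreate()
    import spark.implicits._

    val left: Dataset[(SpatialKey, Int)] =
      Seq((SpatialKey(0, 0), 1), (SpatialKey(0, 1), 2)).toDS()
    val right: Dataset[(SpatialKey, String)] =
      Seq((SpatialKey(0, 0), "a")).toDS()

    // Join on the first tuple element; this only resolves when both sides
    // encode the key column the same way (here: the derived struct Encoder)
    val joined = left.joinWith(right, left("_1") === right("_1"))
    joined.show()

    spark.stop()
  }
}
```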
Hi!
I'm working with the new Spark 2 Datasets API. PR:
https://github.com/geotrellis/geotrellis/pull/1675
The idea is to use `Dataset[(K, V)]` and, for example, to join by a key of
type `K`.
The first problem was that there are no Encoders for custom types (non-products),
so the workaround was to use Kryo:
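Roughly like this, a minimal sketch assuming a hypothetical non-product key class `MyKey` (standing in for the real GeoTrellis key type). `Encoders.kryo` gives you an Encoder for any serializable class, but note it stores the value as a single binary column, which is what later breaks joins on that column:

```scala
import org.apache.spark.sql.{Encoder, Encoders, SparkSession}

// Hypothetical non-product key type: no built-in Encoder exists for it
class MyKey(val col: Int, val row: Int) extends Serializable

object KryoEncoderExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("kryo-encoders")
      .getOrCreate()

    // Fall back to Kryo serialization for the custom type
    implicit val keyEncoder: Encoder[MyKey] = Encoders.kryo[MyKey]
    // Tuple encoder combining the Kryo encoder with the built-in String one
    implicit val pairEncoder: Encoder[(MyKey, String)] =
      Encoders.tuple(keyEncoder, Encoders.STRING)

    val ds = spark.createDataset(
      Seq((new MyKey(0, 0), "a"), (new MyKey(0, 1), "b")))
    println(ds.count())

    spark.stop()
  }
}
```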
So the situation is as follows: we have a Spray server with a Spark context
available (fair scheduling, cluster mode, via spark-submit). There are some
HTTP endpoints that run Spark RDD operations and collect information from
Accumulo / HDFS / etc. (using RDDs). I noticed that there is a sort of