Can you show the stack trace for the encoding error(s)?

Have you looked at the following test, which involves a NestedArray of
primitive types?

./sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/encoders/ExpressionEncoderSuite.scala
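
For reference, here is a minimal sketch that exercises the product encoder for
a case class whose fields are arrays of primitives, similar to the
InvertedIndex in your snippet below. It assumes a Spark 1.6-style SQLContext
named sqlContext (e.g. in spark-shell); the sample rows and the per-group
count are purely illustrative:

import org.apache.spark.sql.Dataset

case class InvertedIndex(partition: Int, docs: Array[Int],
                         indices: Array[Long], weights: Array[Double])

import sqlContext.implicits._  // product and primitive encoders

// If this compiles and runs, the implicit product encoder handles
// Array[Int], Array[Long] and Array[Double] fields.
val ds: Dataset[InvertedIndex] = sqlContext.createDataset(Seq(
  InvertedIndex(0, Array(1, 2), Array(10L, 20L), Array(0.1, 0.2)),
  InvertedIndex(1, Array(3), Array(30L), Array(0.3))
))
ds.show()

// Group with a function and map over the groups, as in your snippet
// (Spark 1.6 API; in Spark 2.0 this becomes groupByKey).
val sizes: Dataset[(Int, Int)] =
  ds.groupBy(_.partition).mapGroups { (part, rows) => (part, rows.size) }
sizes.show()

If the sketch above works but your real code still fails, the stack trace
should point at the type the encoder is choking on, which is why seeing it
would help.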

Cheers

On Mon, Jun 27, 2016 at 8:50 AM, Daniel Imberman <daniel.imber...@gmail.com>
wrote:

> Hi all,
>
> So I've been refactoring a project I'm working on to use the Dataset API
> and have been running into encoding errors. From what I've read, I think I
> should be able to store arrays of primitive values in a Dataset. However,
> the following case class gives me encoding errors:
>
> case class InvertedIndex(partition: Int, docs: Array[Int],
>                          indices: Array[Long], weights: Array[Double])
>
> val inv: RDD[InvertedIndex] = ... // an existing RDD of InvertedIndex records
> val invertedIndexDataset = sqlContext.createDataset(inv)
> invertedIndexDataset.groupBy(x => x.partition).mapGroups {
>     //...
> }
>
> Could someone please help me understand what the issue is here? Can
> Datasets not currently handle Arrays of primitives, or is there something
> extra that I need to do to make them work?
>
> Thank you
>
>
