I have a case class named "Computed", and I'd like to be able to encode
all the Row objects in the DataFrame like this:

def myEncoder(df: DataFrame): Dataset[Computed] =
  df.as(Encoders.product[Computed])

This works just fine on a recent version of Spark, but I'm forced to
use version 1.5.1, which has neither Dataset nor Encoders.

The alternative seems to be iterating over every Row, calling .get()
or .getAs() on it, and setting the corresponding attribute on
Computed.
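For what it's worth, on 1.5.1 that per-row approach doesn't have to be a hand-written loop: DataFrame.map takes a Row => T function and returns a plain RDD[T]. A minimal sketch, assuming Computed has fields id and score (made-up names, substitute your real schema), and toComputed is likewise a hypothetical helper name:

```scala
import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.rdd.RDD

// Hypothetical shape for Computed -- use your real fields here.
case class Computed(id: Long, score: Double)

// Spark 1.5.1 has no Dataset, but DataFrame.map(Row => T) yields an
// RDD[T], so the getAs calls live in one function instead of being
// scattered across per-attribute setters.
def toComputed(df: DataFrame): RDD[Computed] =
  df.map { row =>
    Computed(
      id    = row.getAs[Long]("id"),
      score = row.getAs[Double]("score")
    )
  }
```

The result is an RDD[Computed] rather than a Dataset[Computed], but for most uses on 1.5.1 that is the closest equivalent.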

Is there any other way?
