Hi,

I have the following issue:

case class Item(c1: String, c2: String, c3: Option[BigDecimal])

import sparkSession.implicits._

val result = df.as[Item]
  .groupByKey(_.c1)
  .mapGroups((key, value) => value)

But I get the following error at compile time:

Unable to find encoder for type stored in a Dataset.  Primitive types
(Int, String, etc) and Product types (case classes) are supported by
importing spark.implicits._  Support for serializing other types will
be added in future releases.
What am I missing?

Thanks
