Here is the offending line:

    val some_rdd = pair_rdd.groupByKey().flatMap { case (mk: MyKey, md_iter: Iterable[MyData]) => { ...
[error] ******** .scala:249: overloaded method value groupByKey with alternatives:
[error]   [K](func: org.apache.spark.api.java.function.MapFunction[(aaa.MyKey, aaa.MyData),K], encoder: org.apache.spark.sql.Encoder[K])org.apache.spark.sql.KeyValueGroupedDataset[K,(aaa.MyKey, aaa.MyData)]
[error]   <and>
[error]   [K](func: ((aaa.MyKey, aaa.MyData)) => K)(implicit evidence$4: org.apache.spark.sql.Encoder[K])org.apache.spark.sql.KeyValueGroupedDataset[K,(aaa.MyKey, aaa.MyData)]
[error]  cannot be applied to ()
[error]   val some_rdd = pair_rdd.groupByKey().flatMap { case (mk: MyKey, hd_iter: Iterable[MyData]) => {
[error]                           ^
[warn] ************.scala:249: non-variable type argument aaa.MyData in type pattern Iterable[aaa.MyData] (the underlying of Iterable[aaa.MyData]) is unchecked since it is eliminated by erasure
[warn]   val some_rdd = pair_rdd.groupByKey().flatMap { case (mk: MyKey, hd_iter: Iterable[MyData]) => {
[warn]                           ^
[warn] one warning found

I can't see any obvious API change... What is the problem?

Thanks,
Arun
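For context on the shape of the error (my reading, not a confirmed diagnosis): the two rejected alternatives are the signatures of Dataset.groupByKey, which always requires a key-extractor function, whereas RDD.groupByKey() takes no arguments. So the compiler seems to be resolving pair_rdd as a Dataset[(MyKey, MyData)] rather than an RDD. A minimal plain-Scala analogue of the mismatch, with hypothetical stand-in classes (DatasetLike, RddLike) instead of Spark itself:

    // Stand-ins illustrating why groupByKey() with empty parens can fail
    // to resolve against Dataset-style overloads. Hypothetical names.
    object GroupByKeyAnalogue {
      // Like Dataset[T]: groupByKey has only overloads taking a key function.
      class DatasetLike[T](val data: Seq[T]) {
        def groupByKey[K](func: T => K): Map[K, Seq[T]] = data.groupBy(func)
      }
      // Like RDD[(K, V)]: groupByKey() takes no arguments.
      class RddLike[K, V](val data: Seq[(K, V)]) {
        def groupByKey(): Map[K, Seq[V]] =
          data.groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2) }
      }

      def main(args: Array[String]): Unit = {
        val pairs = Seq(("a", 1), ("a", 2), ("b", 3))
        val rddStyle = new RddLike(pairs).groupByKey()         // compiles: zero-arg overload exists
        val dsStyle  = new DatasetLike(pairs).groupByKey(_._1) // Dataset style needs a key function
        // new DatasetLike(pairs).groupByKey()  // analogous error: cannot be applied to ()
        println(rddStyle("a"))                 // values grouped under key "a"
        println(dsStyle("b").map(_._2))        // values grouped under key "b"
      }
    }

If this reading is right, the fix would be either calling .rdd on pair_rdd before the zero-arg groupByKey(), or passing a key function in the Dataset style; but that depends on what pair_rdd actually is in the surrounding code.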