Hi, Literal cannot handle Tuple2. Anyway, how about this?
val rdd = sc.makeRDD(1 to 3).map(i => (i, 0))
map(rdd.collect.flatMap(x => x._1 :: x._2 :: Nil).map(lit _): _*)

// maropu

On Tue, Nov 15, 2016 at 9:33 AM, Nirav Patel <npa...@xactlycorp.com> wrote:
> I am trying to use the following API from Functions to convert a map into
> a column so I can pass it to a UDF:
>
> map(cols: Column*): Column
> <http://spark.apache.org/docs/latest/api/scala/org/apache/spark/sql/Column.html>
>
> "Creates a new map column. The input columns must be grouped as key-value
> pairs, e.g. (key1, value1, key2, value2, ...). The key columns must all
> have the same data type, and can't be null. The value columns must all have
> the same data type."
>
> final val idxMapArr = idxMapRdd.collectAsMap
> val colmap = map(idxMapArr.map(lit _): _*)
>
> But I am getting the following error:
>
> <console>:139: error: type mismatch;
>  found   : Iterable[org.apache.spark.sql.Column]
>  required: Seq[org.apache.spark.sql.Column]
>        val colmap = map(idxMapArr.map(lit _): _*)
>
> If I try:
>
> val colmap = map(idxMapArr.map(lit _).toSeq: _*)
>
> it says:
>
> java.lang.RuntimeException: Unsupported literal type class scala.Tuple2
> (17.0,MBO)
>   at org.apache.spark.sql.catalyst.expressions.Literal$.apply(literals.scala:57)
>   at org.apache.spark.sql.functions$.lit(functions.scala:101)
>   at $anonfun$1.apply(<console>:153)
>
> What is the correct usage of the `map` API to convert a hashmap into a column?

--
---
Takeshi Yamamuro
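For what it's worth, the key step is the flattening: each (key, value) tuple has to become two separate elements before `lit` is applied, because `functions.map` takes an alternating key1, value1, key2, value2, ... sequence and `lit` cannot wrap a Tuple2. A minimal pure-Scala sketch of just that step (no Spark required; `idxMap` and its contents are hypothetical stand-ins for `idxMapRdd.collectAsMap`):

```scala
// Flatten a Map's (key, value) pairs into the alternating
// key1, value1, key2, value2, ... sequence that functions.map expects.
// In Spark you would then apply .map(lit _) and splice with : _*.
object FlattenExample {
  // hypothetical lookup map standing in for idxMapRdd.collectAsMap
  val idxMap = Map(17 -> "MBO", 42 -> "XYZ")

  // each tuple contributes two consecutive elements: key, then value
  val flat: Seq[Any] = idxMap.toSeq.flatMap { case (k, v) => Seq(k, v) }

  def main(args: Array[String]): Unit =
    println(flat.mkString(","))
}
```

Calling `.toSeq` first also avoids the `Iterable` vs `Seq` mismatch you hit, since `collectAsMap` returns a `Map` whose `map` produces an `Iterable`, not the `Seq` that varargs splicing requires.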