Re: How to use registered Hive UDF in Spark DataFrame?

2015-10-04 Thread Umesh Kacha
... how do we use it? I know how to use it in SQL and it works fine: hiveContext.sql("select MyUDF('test') from myTable"); my hiveContext.sql() query involves group by on multiple columns, so for scaling purposes I am trying to convert it into DataFrame APIs ...
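A minimal sketch of the SQL path described above, for a Spark 1.5-era HiveContext. The table name myTable and the function name MyUDF come from the thread; the implementing class com.example.MyUDF is a placeholder, not from the original messages:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    val sc = new SparkContext(new SparkConf().setAppName("hive-udf-sql"))
    val hiveContext = new HiveContext(sc)

    // Register the Hive UDF under the name MyUDF; com.example.MyUDF is a
    // placeholder for the actual UDF class on the classpath.
    hiveContext.sql("CREATE TEMPORARY FUNCTION MyUDF AS 'com.example.MyUDF'")

    // Calling it through SQL, as reported working in the thread.
    hiveContext.sql("select MyUDF('test') from myTable").show()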

How to use registered Hive UDF in Spark DataFrame?

2015-10-02 Thread unk1102
groupby(""col1","col2","coln").count(); Can we do the follwing dataframe.select(MyUDF("col1"))??? Please guide. -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/How-to-use-registered-Hive-UDF-in-Spark

Re: How to use registered Hive UDF in Spark DataFrame?

2015-10-02 Thread Michael Armbrust
ot;col1","col2","coln").count(); > > Can we do the follwing dataframe.select(MyUDF("col1"))??? Please guide. > > > > -- > View this message in context: > http://apache-spark-user-list.1001560.n3.nabble.com/How-to-use-registered-Hive-UDF-in

Re: How to use registered Hive UDF in Spark DataFrame?

2015-10-02 Thread Umesh Kacha
> ... so for scaling purposes I am trying to convert this query into DataFrame APIs:
> dataframe.select("col1","col2","coln").groupBy("col1","col2","coln").count();
> Can we do the following: dataframe.select(MyUDF("col1"))? ...

Re: How to use registered Hive UDF in Spark DataFrame?

2015-10-02 Thread Michael Armbrust
> My hiveContext.sql() query involves group by on multiple columns, so for scaling purposes I am trying to convert this query into DataFrame APIs:
> dataframe.select("col1","col2","coln").groupBy("col1","col2","coln").count(); ...
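Michael Armbrust's actual reply is not preserved in this digest, so the following is only a sketch of one way to combine a registered UDF with the multi-column groupBy from the question (Spark 1.5+; col1/col2/coln are the question's placeholder column names):

    import org.apache.spark.sql.functions.callUDF

    // Group and count as in the question, then apply the registered UDF to one
    // of the grouping columns of the aggregated result.
    val counts  = dataframe.groupBy("col1", "col2", "coln").count()
    val withUdf = counts.select(callUDF("MyUDF", counts("col1")), counts("count"))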