Can you try writing the UDF directly in Spark and registering it with the
Spark SQL or Hive context?
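If the former, here is a minimal sketch, assuming Spark 1.6+ with a
HiveContext named hiveContext (the function, column, and table names are
illustrative placeholders):

    import org.apache.spark.sql.api.java.UDF1;
    import org.apache.spark.sql.types.DataTypes;

    // Register a UDF natively with Spark SQL; Spark passes plain Java
    // types to it, so no Hadoop Writable casts are involved.
    hiveContext.udf().register("to_double",
        (UDF1<Long, Double>) v -> v == null ? null : v.doubleValue(),
        DataTypes.DoubleType);

    hiveContext.sql("SELECT to_double(some_bigint_col) FROM some_table");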
Or do you want to reuse the existing Hive UDF jar in Spark?
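If the latter, you can usually ship the existing jar and register the Hive
UDF class by name (the jar path, class, and function names below are
placeholders):

    // Submit with: spark-submit --jars /path/to/your-hive-udfs.jar ...
    hiveContext.sql("CREATE TEMPORARY FUNCTION my_udf AS 'com.example.MyHiveUDF'");
    hiveContext.sql("SELECT my_udf(some_col) FROM some_table");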

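Looking at the error quoted below: Spark's Hive UDF bridge can hand your
get method a plain java.lang.Long where Hive itself passes a LongWritable,
which would explain the ClassCastException. A defensive sketch of the
switch, reusing your f and obj and checking the runtime type before
casting:

    // Accept either Hadoop Writable wrappers (as Hive passes them)
    // or plain boxed Java values (as Spark SQL may pass them).
    switch (f) {
        case "double":
            return (obj instanceof DoubleWritable) ? ((DoubleWritable) obj).get() : obj;
        case "bigint":
            return (obj instanceof LongWritable) ? ((LongWritable) obj).get() : obj;
        case "string":
            return obj.toString(); // works for both Text and String
        default:
            return obj;
    }
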
Thanks
Deepak

On Jan 24, 2017 5:29 PM, "Sirisha Cheruvu" <siri8...@gmail.com> wrote:

> Hi Team,
>
> I am keeping the code below in a get method, calling that get method from
> another Hive UDF, and running the Hive UDF through HiveContext.sql:
>
>
> switch (f) {
>     case "double" : return ((DoubleWritable) obj).get();
>     case "bigint" : return ((LongWritable) obj).get();
>     case "string" : return ((Text) obj).toString();
>     default       : return obj;
>   }
> }
>
> Surprisingly, only the LongWritable and Text conversions throw errors;
> the DoubleWritable conversion works. So I tried changing the code to:
>
> switch (f) {
>     case "double" : return ((DoubleWritable) obj).get();
>     case "bigint" : return ((DoubleWritable) obj).get();
>     case "string" : return ((Text) obj).toString();
>     default       : return obj;
>   }
> }
>
> It still throws an error saying java.lang.Long cannot be cast to
> org.apache.hadoop.hive.serde2.io.DoubleWritable.
>
>
>
> It works fine on Hive but throws this error on Spark SQL.
>
> I am importing the packages below:
> import java.util.*;
> import org.apache.hadoop.hive.serde2.objectinspector.*;
> import org.apache.hadoop.io.LongWritable;
> import org.apache.hadoop.io.Text;
> import org.apache.hadoop.hive.serde2.io.DoubleWritable;
>
> Please let me know why this causes an issue in Spark when it runs
> perfectly fine on Hive.
>
