Hi,

We have several UDFs written in Scala that we use within jobs submitted to
Spark. They work perfectly with the sqlContext after being registered. We also
expose the saved tables through the Hive Thrift server bundled with Spark.
However, we would like Hive connections to be able to use these UDFs in their
queries against the saved tables. Is there a way to register UDFs so that they
can be used both within a Spark job and over a Hive connection?
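
For context, the registration in our jobs looks roughly like the sketch below
(Spark 1.x-style API; the function name "to_upper" and the table "saved_table"
are just placeholders, not our real names):

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql.hive.HiveContext

  val sc = new SparkContext(new SparkConf().setAppName("udf-example"))
  val sqlContext = new HiveContext(sc)

  // Register a Scala function as a UDF on this context.
  sqlContext.udf.register("to_upper",
    (s: String) => if (s == null) null else s.toUpperCase)

  // Usable from SQL issued through the same context:
  sqlContext.sql("SELECT to_upper(name) FROM saved_table").show()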

Thanks!
Dave
