Hi everyone,
I am getting the below error while running a Hive Java UDF from SQLContext:
org.apache.spark.sql.AnalysisException: No handler for Hive udf class
com.nexr.platform.hive.udf.GenericUDFNVL2 because:
com.nexr.platform.hive.udf.GenericUDFNVL2.; line 1 pos 26
at
No, it's not required for a UDF.
It's required when you convert from an RDD to a DataFrame.
Thanks
Deepak
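(A minimal sketch of the distinction, assuming Spark 1.x with an existing SparkContext sc; the data and column names are made up:)

val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._ // needed only for the toDF conversion below

// RDD -> DataFrame requires the implicits import:
val df = sc.parallelize(Seq(("a", 1), ("b", 2))).toDF("word", "count")

// Defining and applying a UDF does not:
val plusOne = org.apache.spark.sql.functions.udf((n: Int) => n + 1)
df.select(plusOne(df("count")).as("count_plus_one")).show()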
On 8 Sep 2016 2:25 pm, "Divya Gehlot" <divya.htco...@gmail.com> wrote:
> Hi,
>
> Is it necessary to import sqlContext.implicits._ whenever we define and
> call a UDF in Spark?
>
> Thanks,
> Divya
Hi,
Is it necessary to import sqlContext.implicits._ whenever we define and
call a UDF in Spark?
Thanks,
Divya
Divya:
https://databricks.com/blog/2015/09/16/spark-1-5-dataframe-api-highlights-datetimestring-handling-time-intervals-and-udafs.html
The link gives a complete example of registering a UDAF (user-defined
aggregate function), and should give you a good starting point.
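(Not the blog's example, but a minimal sketch of the same UserDefinedAggregateFunction API it describes, for Spark 1.5+; the class and the registered name "sumsq" are made up:)

import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

// Sums the squares of a numeric column; null inputs are skipped.
class SumOfSquares extends UserDefinedAggregateFunction {
  def inputSchema: StructType = StructType(StructField("value", DoubleType) :: Nil)
  def bufferSchema: StructType = StructType(StructField("sum", DoubleType) :: Nil)
  def dataType: DataType = DoubleType
  def deterministic: Boolean = true
  def initialize(buffer: MutableAggregationBuffer): Unit = { buffer(0) = 0.0 }
  def update(buffer: MutableAggregationBuffer, input: Row): Unit =
    if (!input.isNullAt(0)) {
      val v = input.getDouble(0)
      buffer(0) = buffer.getDouble(0) + v * v
    }
  def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit =
    buffer1(0) = buffer1.getDouble(0) + buffer2.getDouble(0)
  def evaluate(buffer: Row): Double = buffer.getDouble(0)
}

sqlContext.udf.register("sumsq", new SumOfSquares)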
On Thu, Jul 21, 2016 at 5:53 AM, Mich Talebzadeh
wrote:
> something similar
Is this going to be in Scala?
> def ChangeToDate (word : String) : Date = {
> //return
> TO_DATE(FROM_UNIXTIME(UNIX_TIMESTAMP(word,"dd/MM/yyyy"),"yyyy-MM-dd"))
> val d1 =
On Thu, Jul 21, 2016 at 4:53 AM, Divya Gehlot wrote:
> To be very specific, I am looking for UDF syntax, for example one which takes a
> String as parameter and returns an Integer. How do we define the return type?
val f: String => Int = ???
val myUDF = udf(f)
or
val myUDF =
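(The second variant was cut off above; a minimal completed sketch, where the function body and the column name "name" are illustrative:)

import org.apache.spark.sql.functions.udf

val f: String => Int = _.length // any String => Int function works
val myUDF = udf(f) // the return type Int is inferred from f

df.select(myUDF(df("name")).as("name_length")) // DataFrame DSL
sqlContext.udf.register("strLen", f) // or register it for use in SQL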
On Wed, Jul 20, 2016 at 1:22 PM, Rishabh Bhardwaj wrote:
> val new_df = df.select(from_unixtime($"time").as("newtime"))
or better yet using tick (less typing and more prose than code :))
df.select(from_unixtime('time) as "newtime")
Jacek
black box.
>>
>> Andy
>>
>> From: Rishabh Bhardwaj <rbnex...@gmail.com>
>> Date: Wednesday, July 20, 2016 at 4:22 AM
>> To: Rabin Banerjee <dev.rabin.baner...@gmail.com>
>> Cc: Divya Gehlot <divya.htco...@gmail.com>, "user @spark" <user@spark.apache.org>
>> Subject: Re: write and call UDF in spark dataframe
>>
>> Hi Divya,
>>
>> There is already "from_unixtime" in org.apache.spark.sql.functions,
Yep, something along the lines of
val df = sqlContext.sql("SELECT from_unixtime(unix_timestamp(), 'dd/MM/yyyy
HH:mm:ss.ss') as time ")
Note that this does not require a column from an already existing table.
HTH
Dr Mich Talebzadeh
Hi Divya,
There is already "from_unixtime" in org.apache.spark.sql.functions;
Rabin has used that in the SQL query. If you want to use it in the DataFrame DSL,
you can try it like this:
val new_df = df.select(from_unixtime($"time").as("newtime"))
Thanks,
Rishabh.
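(For anyone copying the snippet, the imports it relies on; this assumes df has a column "time" holding unix-epoch seconds:)

import org.apache.spark.sql.functions.from_unixtime
import sqlContext.implicits._ // enables the $"time" column syntax

val new_df = df.select(from_unixtime($"time").as("newtime"))
new_df.show()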
On Wed, Jul 20, 2016 at 4:21
Hi Divya,
Try,
val df = sqlContext.sql("select from_unixtime(ts,'yyyy-MM-dd') as `ts` from mr")
Regards,
Rabin
On Wed, Jul 20, 2016 at 12:44 PM, Divya Gehlot
wrote:
> Hi,
> Could somebody share an example of writing and calling a UDF which converts a
> unix timestamp to a date time?
Hi,
Could somebody share an example of writing and calling a UDF which converts a
unix timestamp to a date time?
Thanks,
Divya
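(The replies above point at the built-in from_unixtime; for the literal question, here is a minimal hand-rolled UDF sketch for Spark 1.x. The function name "toDateTime" is made up, and it assumes the input holds epoch seconds:)

import java.text.SimpleDateFormat
import java.util.Date

sqlContext.udf.register("toDateTime", (ts: Long) =>
  new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date(ts * 1000L)))

sqlContext.sql("SELECT toDateTime(1468995883) AS dt").show()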
Hi,
How do we use a custom Spark Scala UDF in the Spark SQL CLI or Beeline client?
With sqlContext we can register a UDF like this:
sqlContext.udf.register("sample_fn", sample_fn _ )
What is the way to use a UDF in the Spark SQL CLI or Beeline client?
Thanks
Pooja
I want to define some UDFs in my Spark environment
and serve them through the Thrift server, so I can use these UDFs in my Beeline
connection.
At first I tried starting it with the UDF jars and creating functions in Hive.
In spark-sql, I can add temporary functions like "CREATE TEMPORARY FUNCTION
bsdUpper AS
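(For reference, the shape of the full statement as it would be typed in spark-sql or Beeline; the jar path and class name here are placeholders, not the poster's actual ones:)

ADD JAR /path/to/my-udfs.jar;
CREATE TEMPORARY FUNCTION bsdUpper AS 'com.example.udf.BsdUpper';
SELECT bsdUpper('hello');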
How do we use it? I know how to use it in SQL and it works fine:

hiveContext.sql("select MyUDF('test') from myTable");

My hiveContext.sql() query involves a group by on multiple columns, so for
scaling purposes I am trying to convert this query into the DataFrame API:

dataframe.select("col1","col2","coln").groupBy("col1","col2","coln").count();

Can we do the following: dataframe.select(MyUDF("col1"))? Please guide.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-use-registered-Hive-UDF-in-Spark
Hi, I am using a UDF in a hiveContext.sql("") query. Inside, it uses a group by
which forces a huge data shuffle read of around 30 GB. I am thinking of
converting the query to a DataFrame so that I avoid the group by.
How do we use a Hive UDF in a Spark DataFrame? Please guide. Thanks much.
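(A minimal sketch of one way to do this, using callUDF from org.apache.spark.sql.functions, available since Spark 1.5; it assumes MyUDF is already registered with the same HiveContext that built dataframe:)

import org.apache.spark.sql.functions.callUDF

val result = dataframe
  .select(callUDF("MyUDF", dataframe("col1")).as("col1"))
  .groupBy("col1")
  .count()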
Hi Vinod,
Yes, if you want to use a Scala or Python function you need the block of
code.
Only Hive UDFs are available permanently.
Thanks,
Vishnu
On Wed, Jul 8, 2015 at 5:17 PM, vinod kumar vinodsachin...@gmail.com
wrote:
Thanks Vishnu,
When I restart the service, the UDF is not accessible by my query. I need to
run the mentioned block again to use the UDF.
Is there any way to maintain a UDF in sqlContext permanently?
Thanks,
Vinod
On Wed, Jul 8, 2015 at 7:16 AM, VISHNU SUBRAMANIAN
johnfedrickena...@gmail.com
Thank you for the quick response, Vishnu.
I have the following doubts too:
1. Is there any way to upload files to HDFS programmatically using the C#
language?
2. Is there any way to automatically load a Scala block of code (for a UDF)
when I start the Spark service?
-Vinod
On Wed, Jul 8, 2015 at 7:57 AM,
Hi,
sqlContext.udf.register("udfName", functionName _)
example:
def square(x: Int): Int = { x * x }
register the udf as below (note the name is passed as a string):
sqlContext.udf.register("square", square _)
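(Once registered, it can be called from SQL; a quick usage sketch:)
sqlContext.sql("SELECT square(4) AS sq").show() // prints a one-row table containing 16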
Thanks,
Vishnu
On Wed, Jul 8, 2015 at 2:23 PM, vinod kumar vinodsachin...@gmail.com
wrote:
Hi Everyone,
I am new to Spark. May I know how to define and use a user-defined function in
Spark SQL?
I want to use the defined UDF in SQL queries.
My Environment
Windows 8
spark 1.3.1
Warm Regards,
Vinod
You are most likely confused because you are calling the UDF through
HiveContext. In your case, you are using a Spark UDF, not a Hive UDF. For a
naive scenario, I can use Spark UDFs without any Hive installation in my
cluster.
sqlContext.udf.register is for UDFs in Spark. Hive UDFs are stored in Hive
(related issue:
http://stackoverflow.com/questions/25059527/udf-not-working-in-spark-sql)
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Running-Hive-UDF-from-spark-shell-fails-due-to-datatype-issue-tp11426.html
Sent from the Apache Spark User List mailing list