Re: handling null argument in custom udf

2012-12-06 Thread Søren

Right. Thanks for all the help.
It turned out that it did help to check for null in the code. No mystery.
I did try that earlier but the attempt got lost somehow.

Thanks for the advice on using GenericUDF.

cheers
Søren

On 05/12/2012 11:10, Vivek Mishra wrote:

The way a UDF works is that you tell Hive, via your ObjectInspector, about your 
primitive or Java types. So in your case, even if the value is null, you should 
be able to handle it as a String or any other object, and the invocation of the 
evaluate() function will then know the type of the Java object.
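
For reference, a minimal GenericUDF sketch along these lines, assuming the 
two-string-argument myfun from this thread; the class name and the concatenation 
body are illustrative, not the original poster's code:

package com.company.hive;

import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.StringObjectInspector;
import org.apache.hadoop.io.Text;

// Hypothetical GenericUDF variant of myfun; concatenates two strings.
public class MyFunGeneric extends GenericUDF {
  private StringObjectInspector inspA;
  private StringObjectInspector inspB;

  @Override
  public ObjectInspector initialize(ObjectInspector[] args) throws UDFArgumentException {
    if (args.length != 2) {
      throw new UDFArgumentException("myfun expects exactly two arguments");
    }
    // Tell Hive how to read the incoming objects; both are assumed to be strings.
    inspA = (StringObjectInspector) args[0];
    inspB = (StringObjectInspector) args[1];
    // Declare the return type.
    return PrimitiveObjectInspectorFactory.writableStringObjectInspector;
  }

  @Override
  public Object evaluate(DeferredObject[] args) throws HiveException {
    Object a = args[0].get();
    Object b = args[1].get();
    // SQL NULLs arrive as plain Java nulls; handle them explicitly.
    if (a == null || b == null) {
      return null;
    }
    return new Text(inspA.getPrimitiveJavaObject(a) + inspB.getPrimitiveJavaObject(b));
  }

  @Override
  public String getDisplayString(String[] children) {
    return "myfun(" + children[0] + ", " + children[1] + ")";
  }
}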

-Vivek

From: Vivek Mishra
Sent: 05 December 2012 15:36
To: user@hive.apache.org
Subject: RE: handling null argument in custom udf

Could you please look into and share your task log / attempt log for the 
complete error trace, or the actual error behind this?

-Vivek

From: Søren [s...@syntonetic.com]
Sent: 04 December 2012 20:28
To: user@hive.apache.org
Subject: Re: handling null argument in custom udf

Thanks. Did you mean I should handle null in my udf or my serde?

I did try to check for null inside the code in my udf, but it fails even before 
it gets called.

This is from when the udf fails:

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute 
method public org.apache.hadoop.io.Text 
com.company.hive.myfun.evaluate(java.lang.Object,java.lang.Object)
on object com.company.hive.myfun@1412332 of class com.company.hive.myfun with 
arguments {0:java.lang.Object, null} of size 2

It looks like there is a null, or is this error message misleading?


On 04/12/2012 15:43, Edward Capriolo wrote:
There is no null argument. You should handle the null case in your code.

If (arga == null)

Or optionally you could use a generic udf but a regular one should handle what 
you are doing.
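
A minimal sketch of that null check in the plain UDF from this thread; the 
concatenation body is illustrative, only the guard matters:

package com.company.hive;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

public final class myfun extends UDF {
  public Text evaluate(final Text argA, final Text argB) {
    // Hive passes SQL NULLs as Java nulls; return early instead of dereferencing.
    if (argA == null || argB == null) {
      return null;
    }
    return new Text(argA.toString() + argB.toString());
  }
}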

On Tuesday, December 4, 2012, Søren <s...@syntonetic.com> wrote:

Hi Hive community

I have a custom udf, say myfun, written in Java which I utilize like this

select myfun(col_a, col_b) from mytable where etc

col_b is a string type and sometimes it is null.

When that happens, my query crashes with
---
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
Hive Runtime Error while processing row
{"col_a":"val","col_b":null}
...
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to execute 
method public org.apache.hadoop.io.Text
---

public final class myfun extends UDF {
 public Text evaluate(final Text argA, final Text argB) {

I'm unsure how this should properly be fixed. Is the framework looking 
for an overload of evaluate that would accommodate the null argument?

I should mention that the table is declared using my own JSON SerDe reading from 
S3. I'm not processing nulls in my SerDe in any special way, because Hive seems 
to handle null correctly when it is not passed to my own UDF.

Does anyone out there have ideas or experience with this issue?

thanks in advance
Søren

external table on flume log files in S3

2012-04-24 Thread Søren

Hi Hive community

We are collecting huge amounts of data into Amazon S3 using Flume.

In Elastic MapReduce, we have so far managed to create an external Hive 
table on JSON-formatted, gzipped log files in S3 using a customized 
SerDe. The log files are collected and stored in a single folder, with 
file names following this pattern:

usr-20120423-012725137+.2392780833002846.0029.gz
usr-20120423-012928765+.2392904461259123.0029.gz
usr-20120423-013032368+.2392968063991639.0029.gz

There are thousands to millions of these files. Is there a way to make 
Hive benefit from the datetime stamp in the filenames? For example, to 
run queries on smaller subsets, or to filter when creating the 
external table.


When using INPUT__FILE__NAME, the job gets done, but there is no 
significant performance gain. I guess this is due to the evaluation order 
of the SQL statement: processing the entire repository takes the same 
time as processing only one day's logs, with the same large number of 
file-open tasks in total.


SELECT *
FROM mytable
WHERE INPUT__FILE__NAME LIKE 's3://myflume-logs/usr-20120423%';
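
For comparison, a hedged sketch of the usual workaround, not something from this 
thread: if the files could be laid out (or copied) into date-named folders, a 
partitioned external table would let Hive prune whole days at the metastore 
level instead of opening every file. Table, column, and SerDe names below are 
illustrative.

CREATE EXTERNAL TABLE mytable_by_day (col_a STRING, col_b STRING)
PARTITIONED BY (dt STRING)
ROW FORMAT SERDE 'com.company.hive.MyJsonSerDe'
LOCATION 's3://myflume-logs/partitioned/';

-- Register one partition per day of logs.
ALTER TABLE mytable_by_day ADD PARTITION (dt='20120423')
LOCATION 's3://myflume-logs/partitioned/dt=20120423/';

-- The predicate now prunes partitions, so only that day's files are opened.
SELECT * FROM mytable_by_day WHERE dt = '20120423';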

Best-practice knowledge from others who have been down this road is very 
welcome.


thanks in advance
Soren