Manu,

If you have written a UDF in Java for Hive, you need to copy your JAR into the
/usr/lib/hive/lib/ folder on your Hadoop cluster so that Hive can use it.
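
For reference, a minimal Java UDF usually looks something like the sketch below
(the package, class, and function names are just placeholder examples, not
anything from your setup):

// Minimal sketch of a Hive UDF in Java using the classic UDF API.
// Package and class names here are placeholder examples.
package com.example.hive.udf;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

public final class LowerCaseUDF extends UDF {
    // Hive calls evaluate() once per value passed to the function.
    public Text evaluate(final Text input) {
        if (input == null) {
            return null;
        }
        return new Text(input.toString().toLowerCase());
    }
}

After building the class into a JAR and placing it in /usr/lib/hive/lib/ (or
adding it at runtime with ADD JAR /path/to/your.jar;), you can register it from
the Hive CLI with something like
CREATE TEMPORARY FUNCTION my_lower AS 'com.example.hive.udf.LowerCaseUDF';
and then call my_lower(col) in your queries.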

Thank You,
Manish

From: Manu A [mailto:hadoophi...@gmail.com]
Sent: Tuesday, September 25, 2012 3:44 PM
To: user@hive.apache.org
Subject: Custom MR scripts using java in Hive

Hi All,
I am learning Hive. Please let me know if anyone has tried custom MapReduce
scripts using Java in Hive, or refer me to some links and blogs with an example.

When I tried, I got the below error:

Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2012-09-25 02:47:23,720 Stage-1 map = 0%,  reduce = 0%
2012-09-25 02:47:56,943 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201209222231_0001 with errors
Error during job, obtaining debugging information...
Examining task ID: task_201209222231_0001_m_000002 (and more) from job job_201209222231_0001
Exception in thread "Thread-51" java.lang.RuntimeException: Error while reading from task log url
        at org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getErrors(TaskLogProcessor.java:130)
        at org.apache.hadoop.hive.ql.exec.JobDebugger.showJobFailDebugInfo(JobDebugger.java:211)
        at org.apache.hadoop.hive.ql.exec.JobDebugger.run(JobDebugger.java:81)
        at java.lang.Thread.run(Thread.java:619)
Caused by: java.io.IOException: Server returned HTTP response code: 400 for URL: // removed as confidential
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1313)
        at java.net.URL.openStream(URL.java:1010)
        at org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getErrors(TaskLogProcessor.java:120)
        ... 3 more
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec




Thanks for your help in advance :)



Thanks & Regards,
Manu





