I tried opening the URL below, but nothing opened; I just got "page cannot be
displayed." Why is that?



Raihan Jamal



On Fri, Jul 20, 2012 at 12:39 PM, Sriram Krishnan <skrish...@netflix.com> wrote:

>  What version of Hadoop and Hive are you using? We have seen errors like
> this in the past – and you can actually replace taskid with attemptid to
> fetch your logs.
>
>  So try this:
> http://lvsaishdc3dn0857.lvs.ebay.com:50060/tasklog?attemptid=attempt_201207172005_14407_r_000000_1&all=true
>
>
>  But yes, that is not the reason the job failed – you actually have to
> look at the task logs to figure it out.
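The taskid → attemptid rewrite suggested above can be sketched as a small helper (the function name is hypothetical; only the Python standard library is used):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def taskid_to_attemptid(url):
    """Rewrite a failing task-log URL, swapping the taskid query
    parameter for attemptid, as suggested in this thread. All other
    parameters (e.g. all=true) are preserved in order."""
    scheme, netloc, path, query, frag = urlsplit(url)
    params = [("attemptid" if k == "taskid" else k, v)
              for k, v in parse_qsl(query)]
    return urlunsplit((scheme, netloc, path, urlencode(params), frag))
```

Running it on the URL from the stack trace below produces the attemptid form of the same URL.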
>
>  Sriram
>
>   From: "kulkarni.swar...@gmail.com" <kulkarni.swar...@gmail.com>
> Reply-To: <user@hive.apache.org>
> Date: Fri, 20 Jul 2012 14:28:48 -0500
> To: <user@hive.apache.org>
> Subject: Re: Error while reading from task log url
>
>  First of all, this exception is not what is causing your job to fail. When
> a job fails, Hive attempts to automatically retrieve the task logs from the
> TaskTracker's TaskLogServlet. This suggests something is wrong with your
> Hadoop setup; perhaps the TaskTracker is down?
>
>  You can suppress this exception by doing:
>
>  hive> SET hive.exec.show.job.failure.debug.info=false;
>
>  Look into your task logs to see why your job actually failed.
>
> On Fri, Jul 20, 2012 at 2:12 PM, Raihan Jamal <jamalrai...@gmail.com> wrote:
>
>> Whenever I run the query below:
>>
>> SELECT buyer_id, item_id, ranknew(buyer_id, item_id), created_time
>> FROM (
>>     SELECT buyer_id, item_id, created_time
>>     FROM testingtable1
>>     DISTRIBUTE BY buyer_id, item_id
>>     SORT BY buyer_id, item_id, created_time DESC
>> ) a
>> WHERE ranknew(buyer_id, item_id) % 2 == 0;
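What this query computes can be sketched in plain Python. The stand-in below is hypothetical: it assumes ranknew numbers rows 1-based within each (buyer_id, item_id) group after the DISTRIBUTE BY / SORT BY ordering, and that created_time sorts descending; under those assumptions the WHERE clause keeps only even-ranked rows per group:

```python
from itertools import groupby

def even_ranked(rows):
    """rows: iterable of (buyer_id, item_id, created_time) tuples.
    Mimics DISTRIBUTE BY buyer_id, item_id / SORT BY ... created_time DESC,
    then keeps rows whose 1-based in-group rank is even (rank % 2 == 0)."""
    # Order so rows cluster by (buyer_id, item_id), newest created_time first.
    ordered = sorted(rows, key=lambda r: (r[0], r[1], -r[2]))
    result = []
    for _, group in groupby(ordered, key=lambda r: (r[0], r[1])):
        for rank, (buyer, item, created) in enumerate(group, start=1):
            if rank % 2 == 0:
                result.append((buyer, item, rank, created))
    return result
```

With a 1-based rank, this drops the newest row of every group and keeps the second-newest, fourth-newest, and so on; if ranknew is actually 0-based, the kept/dropped sets flip.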
>>
>>
>> I always get the error below, and I have no clue what it means. Is there a
>> problem with my query, or is something wrong with the system?
>>
>> Total MapReduce jobs = 1
>> Launching Job 1 out of 1
>> Number of reduce tasks not specified. Estimated from input data size: 1
>> In order to change the average load for a reducer (in bytes):
>>   set hive.exec.reducers.bytes.per.reducer=<number>
>> In order to limit the maximum number of reducers:
>>   set hive.exec.reducers.max=<number>
>> In order to set a constant number of reducers:
>>   set mapred.reduce.tasks=<number>
>> Starting Job = job_201207172005_14407, Tracking URL = http://ares-jt.vip.ebay.com:50030/jobdetails.jsp?jobid=job_201207172005_14407
>> Kill Command = /home/hadoop/latest/bin/../bin/hadoop job -Dmapred.job.tracker=ares-jt:8021 -kill job_201207172005_14407
>> 2012-07-21 02:07:15,917 Stage-1 map = 0%,  reduce = 0%
>> 2012-07-21 02:07:27,211 Stage-1 map = 100%,  reduce = 0%
>> 2012-07-21 02:07:38,700 Stage-1 map = 100%,  reduce = 33%
>> 2012-07-21 02:07:48,517 Stage-1 map = 100%,  reduce = 0%
>> 2012-07-21 02:08:49,566 Stage-1 map = 100%,  reduce = 0%
>> 2012-07-21 02:09:08,640 Stage-1 map = 100%,  reduce = 100%
>> Ended Job = job_201207172005_14407 with errors
>> java.lang.RuntimeException: Error while reading from task log url
>>         at org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getErrors(TaskLogProcessor.java:130)
>>         at org.apache.hadoop.hive.ql.exec.ExecDriver.showJobFailDebugInfo(ExecDriver.java:931)
>>         at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:716)
>>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
>>         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
>>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:621)
>>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:495)
>>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:374)
>>         at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:138)
>>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:197)
>>         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:302)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>         at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> Caused by: java.io.IOException: Server returned HTTP response code: 400 for URL: http://lvsaishdc3dn0857.lvs.ebay.com:50060/tasklog?taskid=attempt_201207172005_14407_r_000000_1&all=true
>>         at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1313)
>>         at java.net.URL.openStream(URL.java:1010)
>>         at org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getErrors(TaskLogProcessor.java:120)
>>         ... 15 more
>> Ended Job = job_201207172005_14407 with exception 'java.lang.RuntimeException(Error while reading from task log url)'
>>
>>
>>
>>
>> Raihan Jamal
>>
>>
>
>
>  --
> Swarnim
>
