To: hive-user@hadoop.apache.org
Subject: Re: Regression in trunk? (RE: Insert overwrite error using hive trunk)
I pasted what I found in /tmp//hive.log and that wasn't very indicative either.
Pradeep
From: Ning Zhang [mailto:nzh...@facebook.com]
Sent: Tuesday, September 28, 2010 11:24 AM
To:
Subject: Re: Regression in trunk? (RE: Insert overwrite error using hive trunk)
Pradeep, can you open the tracking URL printed out from the log and click
through to the task log? The real error should be printed over there. The link
may be expired so you need to rerun the query and click on the new one.
I'm suspecting the error is due to the fact that CombineHiveInputFormat
Should I open a jira for this? So far it seems like a regression.
Pradeep
From: Pradeep Kamath [mailto:prade...@yahoo-inc.com]
Sent: Tuesday, September 28, 2010 9:32 AM
To: hive-user@hadoop.apache.org
Subject: Re: Regression in trunk? (RE: Insert overwrite error
With "hive -hiveconf hive.root.logger=DEBUG,DRFA -e ... "
/tmp//hive.log seems to have pretty detailed log messages
including debug msgs. I don't see the "initialization failed" message
and the stack trace mentioned in HADOOP-5759 - is there any other place
I need to check? On the UI I only see
Pradeep, you might be hitting HADOOP-5759 and the job is not getting
initialized at all. Look in JobTracker logs for the jobid to confirm the same.
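The JobTracker-log check above can be sketched as a simple grep for the job id. This is a minimal sketch under stated assumptions: the log path varies by install, and the sample line below is fabricated purely so the commands run as written.

```shell
# Hypothetical log path; on a real cluster, point this at the actual
# JobTracker log file (location depends on your Hadoop install).
LOG=/tmp/jobtracker.log.sample

# Fabricated sample line for illustration, so the grep below is runnable:
echo "2010-09-27 17:40:02,113 INFO mapred.JobTracker: job_201009251752_1341 initialization failed" > "$LOG"

# Search for the job id to see whether the job failed during initialization:
grep "job_201009251752_1341" "$LOG"
```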
On 9/28/10 6:28 AM, "Pradeep Kamath" wrote:
Here is some relevant stuff from /tmp/pradeepk/hive.logs - can't make
much out of it:
2010-09-27 17:40:01,081 INFO exec.MapRedTask
(SessionState.java:printInfo(268)) - Starting Job =
job_201009251752_1341, Tracking URL =
http://:50030/jobdetails.jsp?jobid=job_201009251752_1341
2010-09-27 17:
>From the error info, it seems the 2nd job has been launched and failed. So I'm
>assuming there are map tasks started? If not, you can find the error message
>in the client log file /tmp//hive.log at the machine running hive
>after setting the hive.root.logger property Steven mentioned.
On Sep
Try "hive -hiveconf hive.root.logger=DEBUG,DRFA -e ..." to get more context of
the error.
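Spelled out, the suggested invocation looks like the sketch below. The query is a placeholder (the actual failing statement isn't shown in the thread), and with the DRFA appender the client-side log lands in /tmp/<user>/hive.log by default.

```shell
# Re-run the failing query with debug-level client logging enabled.
# The query string is a placeholder; substitute the actual statement.
hive -hiveconf hive.root.logger=DEBUG,DRFA -e "<your failing INSERT OVERWRITE query>"

# Then inspect the client-side log for the full error context:
tail -n 200 /tmp/$USER/hive.log
```

This requires a working Hive installation, so it is a sketch rather than a runnable example.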
From: Pradeep Kamath [mailto:prade...@yahoo-inc.com]
Sent: Monday, September 27, 2010 12:34 PM
To: hive-user@hadoop.apache.org
Subject: RE: Regression in trunk? (RE: Insert overwrite error using
This clearly indicates the merge still happens due to the conditional task. Can
you double-check if the parameter is set (hive.merge.mapfiles)?
You can also revert to the old map-reduce merging (rather
than using CombineHiveInputFormat for map-only merging) by setting
hive.m
Here is the output of explain:
STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-4 depends on stages: Stage-1, consists of Stage-3, Stage-2
  Stage-3
  Stage-0 depends on stages: Stage-3, Stage-2
  Stage-2

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Alias -> Map Operator Tree:
        numbers
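A plan like the one above can be regenerated with EXPLAIN after setting the parameter, to check whether the conditional merge stage (Stage-4 here) disappears. A hedged sketch; the table names are placeholders, since the actual query isn't shown in the thread:

```shell
# Regenerate the plan with the merge disabled; if the setting takes effect,
# the conditional merge stage should no longer appear in the output.
# The INSERT statement is a placeholder for the actual failing query.
hive -e "
SET hive.merge.mapfiles=false;
EXPLAIN INSERT OVERWRITE TABLE dest_table SELECT * FROM numbers;
"
```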
There is one ticket for insert overwrite local directory:
https://issues.apache.org/jira/browse/HIVE-1582
On Mon, Sep 27, 2010 at 9:31 AM, Ning Zhang wrote:
Can you do explain your query after setting the parameter?
On Sep 27, 2010, at 9:25 AM, Ashutosh Chauhan wrote:
> I suspected the same. But, even after setting this property, second MR
> job did get launched and then failed.
>
> Ashutosh
> On Mon, Sep 27, 2010 at 09:25, Ning Zhang wrote:
I suspected the same. But, even after setting this property, second MR
job did get launched and then failed.
Ashutosh
On Mon, Sep 27, 2010 at 09:25, Ning Zhang wrote:
> I'm guessing this is due to the merge task (the 2nd MR job that merges small
> files together). You can try to 'set hive.merge.mapfiles=false;' before the
> query and see if it succeeded.
I'm guessing this is due to the merge task (the 2nd MR job that merges small
files together). You can try to 'set hive.merge.mapfiles=false;' before the
query and see if it succeeded.
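As a hedged sketch of that check (the table names below are placeholders, since the actual failing query isn't shown in the thread):

```shell
# Disable the conditional map-only merge stage, then re-run the query.
# If the query now succeeds, the failure is in the merge job.
# The INSERT statement is a placeholder for the actual failing query.
hive -e "
SET hive.merge.mapfiles=false;
INSERT OVERWRITE TABLE dest_table SELECT * FROM numbers;
"
```

This requires a Hive installation, so it is a sketch rather than a runnable example.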
If it is due to the merge job, can you attach the plan, check the mapper/reducer
task log, and see what errors/exceptions show up?
Hi,
Any help in debugging the issue I am seeing below will be greatly
appreciated. Unless I am doing something wrong, this seems to be a regression
in trunk.
Thanks,
Pradeep
From: Pradeep Kamath [mailto:prade...@yahoo-inc.com]
Sent: Friday, September 24, 2010