Hive Query Error: Cannot obtain block length

2016-06-28 Thread Arun Patel
I am trying to do log analytics on logs created by Flume.  Hive queries
are failing with the error below.  The "hadoop fs -cat" command works on all
of these open files.  Is there a way to read open files from Hive?  My
requirement is to read the data from open files too.  I am using Tez as the
execution engine.

select b.ts as ts1, a.ts as ts2, a.tid from log_v_rst1 a join log_v_rst2 b
on a.tid = b.tid;

Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 2, vertexId=vertex_1467055804180_0003_2_00, diagnostics=[Task failed, taskId=task_1467055804180_0003_2_00_06, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: java.io.IOException: Cannot obtain block length for LocatedBlock{BP-854133642-XX.XXX.XX.XX-1460753641159:blk_1073771231_33308; getBlockSize()=0; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[XX.XXX.XX.XXX:1019,DS-3712a177-3199-4b61-bd34-598d04edc6d9,DISK]]}
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:171)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:137)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:344)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
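
"Cannot obtain block length" with getBlockSize()=0 usually means the file's
last block is still under construction because Flume has it open for write.
Not part of the original thread, but a common way to locate and recover such
files with the HDFS 2.x CLI (the paths below are illustrative):

```shell
# List files that are still open for write under the Flume landing
# directory (/flume/logs is an illustrative path).
hdfs fsck /flume/logs -files -openforwrite

# Ask the NameNode to recover the lease on a stuck file so its block
# length is finalized and Hive can read it (file name is illustrative).
hdfs debug recoverLease -path /flume/logs/events.1467055804180.log.tmp -retries 3
```

Alternatively, configuring the Flume HDFS sink to roll files more
aggressively (e.g. hdfs.rollInterval) reduces how long files stay open.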


Re: Hive Query Error

2015-07-09 Thread Ajeet O
Hi Nitin, how do I check this? Do you mean checking hive-site.xml? Please
let me know how to check it.
 




[quoted reply from Nitin Pawar and original message trimmed]

Re: Hive Query Error

2015-07-09 Thread Nitin Pawar
Can you check your config?
The host appears twice: 01hw357381.tcsgegdc.com: 01hw357381.tcsgegdc.com
It should be hostname:port.

Also, once you correct this, do an nslookup on the host to make sure it can
be resolved by the Hive client.
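
Not from the original thread, but a sketch of how to verify the hostname
resolution described above (the host name comes from the error; the IP
address is illustrative):

```shell
# Check what the machine thinks its fully-qualified hostname is
hostname -f

# Verify the name actually resolves; if this fails, DNS or /etc/hosts
# is the problem
nslookup 01hw357381.tcsgegdc.com

# If there is no DNS entry, map the name locally (illustrative IP)
echo "192.168.1.50  01hw357381.tcsgegdc.com" | sudo tee -a /etc/hosts
```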

On Thu, Jul 9, 2015 at 7:19 PM, Ajeet O  wrote:

> [quoted text trimmed]


-- 
Nitin

Hive Query Error

2015-07-09 Thread Ajeet O
Hi all, I have installed Hadoop 2.0 and Hive 0.12 on CentOS 7.

When I run the query select count(*) from u_data; in Hive, it fails with
the following errors. However, I can run select * from u_data;
successfully. Please help.

hive> select count(*) from u_data;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapred.reduce.tasks=
java.net.UnknownHostException: 01hw357381.tcsgegdc.com: 01hw357381.tcsgegdc.com: unknown error
    at java.net.InetAddress.getLocalHost(InetAddress.java:1484)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:439)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:425)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:144)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1414)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1192)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1020)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:781)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.UnknownHostException: 01hw357381.tcsgegdc.com: unknown error
    at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:907)
    at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1302)
    at java.net.InetAddress.getLocalHost(InetAddress.java:1479)
    ... 34 more
Job Submission failed with exception 'java.net.UnknownHostException(01hw357381.tcsgegdc.com: 01hw357381.tcsgegdc.com: unknown error)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

Thanks
Ajeet

=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you




Re: Hive Query Error

2014-02-05 Thread Stephen Sprague
file this one under RTFM.


On Wed, Feb 5, 2014 at 9:11 AM, Nitin Pawar  wrote:

> [quoted text trimmed]


Re: Hive Query Error

2014-02-05 Thread Nitin Pawar
It's CREATE TABLE xyz STORED AS SEQUENCEFILE AS SELECT blah FROM table
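
Spelled out against the failing statement from the original post: the
STORED AS clause belongs to the table definition and must come before AS
SELECT, so the query would become:

```sql
CREATE TABLE temp_xyz
STORED AS SEQUENCEFILE
AS
SELECT prop1, prop2, prop3, prop4, prop5
FROM hitdata
WHERE dateoflog = 20130101 AND prop1 = '785-ou';
```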


On Wed, Feb 5, 2014 at 10:37 PM, Raj Hadoop  wrote:

> [quoted text trimmed]



-- 
Nitin Pawar


Hive Query Error

2014-02-05 Thread Raj Hadoop
I am trying to create a Hive sequence file from another table by running the 
following -

Your query has the following error(s):
OK FAILED: ParseException line 5:0 cannot recognize input near 'STORED'
'STORED' 'AS' in constant
click the Error Log tab above for details

1  CREATE TABLE temp_xyz as
2  SELECT prop1,prop2,prop3,prop4,prop5
3  FROM hitdata
4  WHERE dateoflog=20130101 and prop1='785-ou'
5  STORED AS SEQUENCEFILE;

Re: hive query error

2013-08-21 Thread 闫昆
Thanks Bing, I found it.


2013/8/22 Bing Li 

> [quoted text trimmed]


-- 

In the Hadoop world I am just a novice exploring the entire Hadoop
ecosystem; I hope one day I can contribute code of my own.

YanBit
yankunhad...@gmail.com


Re: hive query error

2013-08-21 Thread Bing Li
By default, hive.log is written under /tmp/<user>/.
The location can also be set in $HIVE_HOME/conf/hive-log4j.properties and
hive-exec-log4j.properties via:
- hive.log.dir
- hive.log.file
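
Not part of the original exchange, but a quick way to find the client log
described above (the username is illustrative; Hive's default hive.log.dir
is ${java.io.tmpdir}/${user.name}):

```shell
# Default client-side log location, for a user named 'hadoop'
less /tmp/hadoop/hive.log

# Or point logs somewhere persistent for a single session instead
hive --hiveconf hive.log.dir=/var/log/hive --hiveconf hive.log.file=hive.log
```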


2013/8/22 闫昆 

> [quoted text trimmed]


hive query error

2013-08-21 Thread 闫昆
Hi all,
When I execute a Hive query it throws the exception below.
I don't know where the error log is; $HIVE_HOME/logs does not exist.

Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 3
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapred.reduce.tasks=
Cannot run job locally: Input Size (= 2304882371) is larger than
hive.exec.mode.local.auto.inputbytes.max (= 134217728)
Starting Job = job_1377137178318_0001, Tracking URL =
http://hydra0001:8088/proxy/application_1377137178318_0001/
Kill Command = /opt/module/hadoop-2.0.0-cdh4.3.0/bin/hadoop job  -kill
job_1377137178318_0001
Hadoop job information for Stage-1: number of mappers: 18; number of
reducers: 3
2013-08-22 10:07:49,654 Stage-1 map = 0%,  reduce = 0%
2013-08-22 10:08:05,544 Stage-1 map = 6%,  reduce = 0%
2013-08-22 10:08:07,289 Stage-1 map = 0%,  reduce = 0%
2013-08-22 10:08:58,217 Stage-1 map = 28%,  reduce = 0%
2013-08-22 10:09:07,210 Stage-1 map = 22%,  reduce = 0%
Ended Job = job_1377137178318_0001 with errors
Error during job, obtaining debugging information...
null
FAILED: Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 18  Reduce: 3   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
-- 

In the Hadoop world I am just a novice exploring the entire Hadoop
ecosystem; I hope one day I can contribute code of my own.

YanBit
yankunhad...@gmail.com


execute hive query error "Error: GC overhead limit exceeded"

2013-08-21 Thread ch huang
Hi all,
I executed a query but it failed with an error; does anyone know what
happened? BTW, I am using the YARN framework.

2013-08-22 09:47:09,893 Stage-1 map = 28%,  reduce = 1%, Cumulative CPU
4140.64 sec
2013-08-22 09:47:10,952 Stage-1 map = 28%,  reduce = 1%, Cumulative CPU
4140.72 sec
2013-08-22 09:47:12,008 Stage-1 map = 28%,  reduce = 1%, Cumulative CPU
4140.8 sec
2013-08-22 09:47:13,066 Stage-1 map = 28%,  reduce = 1%, Cumulative CPU
4147.85 sec
2013-08-22 09:47:14,122 Stage-1 map = 28%,  reduce = 1%, Cumulative CPU
4162.26 sec
2013-08-22 09:47:15,181 Stage-1 map = 28%,  reduce = 1%, Cumulative CPU
4162.33 sec
2013-08-22 09:47:16,238 Stage-1 map = 28%,  reduce = 1%, Cumulative CPU
4162.33 sec
2013-08-22 09:47:17,297 Stage-1 map = 28%,  reduce = 1%, Cumulative CPU
4162.41 sec
2013-08-22 09:47:18,353 Stage-1 map = 28%,  reduce = 1%, Cumulative CPU
4162.5 sec
2013-08-22 09:47:19,410 Stage-1 map = 28%,  reduce = 1%, Cumulative CPU
4162.5 sec
2013-08-22 09:47:20,470 Stage-1 map = 28%,  reduce = 1%, Cumulative CPU
4066.31 sec
MapReduce Total cumulative CPU time: 0 days 1 hours 7 minutes 46 seconds
310 msec
Ended Job = job_1376355583846_0115 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1376355583846_0115_m_88 (and more) from job
job_1376355583846_0115
Examining task ID: task_1376355583846_0115_m_92 (and more) from job
job_1376355583846_0115
Examining task ID: task_1376355583846_0115_m_13 (and more) from job
job_1376355583846_0115
Examining task ID: task_1376355583846_0115_m_000105 (and more) from job
job_1376355583846_0115
Examining task ID: task_1376355583846_0115_m_21 (and more) from job
job_1376355583846_0115
Examining task ID: task_1376355583846_0115_m_000101 (and more) from job
job_1376355583846_0115

Task with the most failures(4):
-
Task ID:
  task_1376355583846_0115_m_18

URL:

http://CH22:8088/taskdetails.jsp?jobid=job_1376355583846_0115&tipid=task_1376355583846_0115_m_18
-
Diagnostic Messages for this Task:
Error: GC overhead limit exceeded
FAILED: Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 149  Reduce: 41   Cumulative CPU: 4066.31 sec   HDFS Read:
13504652062 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 days 1 hours 7 minutes 46 seconds 310 msec

Log info:
# less hive-server2.log
2013-08-22 09:30:55,834 WARN  shims.HadoopShimsSecure
(Hadoop23Shims.java:getTaskAttemptLogUrl(45)) - Can't fetch tasklog:
TaskLogServlet is not supported in MR2 mode.
2013-08-22 09:30:55,842 ERROR exec.Task (SessionState.java:printError(421))
- Examining task ID: task_1376355583846_0114_m_09 (and more) from job
job_1376355583846_0114
[... the same "Can't fetch tasklog" WARN line repeats many more times ...]
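
"GC overhead limit exceeded" means the map tasks are running out of heap.
This was not suggested in the thread itself, but the usual first step on
MR2/YARN is to raise the task heap and container sizes before re-running
the query; a sketch with illustrative sizes (the -Xmx value should stay
below the container size):

```sql
-- Run inside the Hive session before re-submitting the failing query
SET mapreduce.map.memory.mb=4096;
SET mapreduce.map.java.opts=-Xmx3276m;
SET mapreduce.reduce.memory.mb=4096;
SET mapreduce.reduce.java.opts=-Xmx3276m;
```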