Re: hive -e run tez query error

2015-06-22 Thread r7raul1...@163.com
Sorry, I made a mistake: using the Hive CLI to execute the same query on Tez
throws the same exception.


r7raul1...@163.com
 
From: r7raul1...@163.com
Date: 2015-06-23 13:53
To: user
Subject: hive -e run tez query error

hive -e run tez query error

2015-06-22 Thread r7raul1...@163.com
When I use Hive 1.1.0 on Tez 0.5.3 in Hadoop 2.3.0:

hive -v -e "set hive.execution.engine=tez;set mapred.job.queue.name=bi_etl;drop 
table TESTTMP.a_start;create table TESTTMP.a_start(id bigint);insert overwrite 
table TESTTMP.a_start select id from tandem.p_city;drop table 
TESTTMP.a_end;create table TESTTMP.a_end(id bigint);insert overwrite table 
TESTTMP.a_end select id from TESTTMP.a_start;" 

Logging initialized using configuration in 
jar:file:/usr/local/src/apache-hive/lib/hive-common-1.1.0.jar!/hive-log4j.properties
 
SLF4J: Class path contains multiple SLF4J bindings. 
SLF4J: Found binding in 
[jar:file:/usr/local/src/apache-hive/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 
SLF4J: Found binding in 
[jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation. 
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] 
set hive.execution.engine=tez 
set mapred.job.queue.name=bi_etl 
drop table TESTTMP.a_start 
OK 
Time taken: 1.702 seconds 
create table TESTTMP.a_start(id bigint) 
OK 
Time taken: 0.311 seconds 
insert overwrite table TESTTMP.a_start select id from tandem.p_city 
Query ID = lujian_20150623134848_31ba3183-74b8-4c3a-96ae-5c8b650b99df 
Total jobs = 1 
Launching Job 1 out of 1 


Status: Running (Executing on YARN cluster with App id 
application_1433219182593_252390) 


 
--------------------------------------------------------------------------------
VERTICES         STATUS      TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
--------------------------------------------------------------------------------
Map 1 ..         SUCCEEDED       1          1        0        0       0       0
--------------------------------------------------------------------------------
VERTICES: 01/01  [==>>] 100%  ELAPSED TIME: 13.55 s
--------------------------------------------------------------------------------

 
Loading data to table testtmp.a_start 
Moved: 
'hdfs://yhd-jqhadoop2.int.yihaodian.com:8020/user/hive/warehouse/testtmp.db/a_start/00_0'
 to trash at: 
hdfs://yhd-jqhadoop2.int.yihaodian.com:8020/user/lujian/.Trash/Current 
Table testtmp.a_start stats: [numFiles=1, numRows=531, totalSize=2154, 
rawDataSize=1623] 
OK 
Time taken: 25.586 seconds 
drop table TESTTMP.a_end 
OK 
Time taken: 0.232 seconds 
create table TESTTMP.a_end(id bigint) 
OK 
Time taken: 0.068 seconds 
insert overwrite table TESTTMP.a_end select id from TESTTMP.a_start 
Query ID = lujian_20150623134949_bff735c9-6abc-47e7-a9f7-f7e2be7e43e9 
Total jobs = 1 
Launching Job 1 out of 1 


Status: Running (Executing on YARN cluster with App id 
application_1433219182593_252390) 


 
--------------------------------------------------------------------------------
VERTICES         STATUS      TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
--------------------------------------------------------------------------------
Map 1            FAILED          1          0        0        1       4       0
--------------------------------------------------------------------------------
VERTICES: 00/01  [>>--] 0%  ELAPSED TIME: 15.46 s
--------------------------------------------------------------------------------

 
Status: Failed 
Vertex failed, vertexName=Map 1, vertexId=vertex_1433219182593_252390_2_00, 
diagnostics=[Task failed, taskId=task_1433219182593_252390_2_00_00, 
diagnostics=[TaskAttempt 0 failed, info=[Container 
container_1433219182593_252390_01_03 finished with diagnostics set to 
[Container failed. File does not exist: 
hdfs://yhd-jqhadoop2.int.yihaodian.com:8020/user/lujian/lujian/_tez_session_dir/63de23a2-1cff-4434-96ad-1304089fb489/.tez/application_1433219182593_252390/tez-conf.pb
 
]], TaskAttempt 1 failed, info=[Container 
container_1433219182593_252390_01_04 finished with diagnostics set to 
[Container failed. File does not exist: 
hdfs://yhd-jqhadoop2.int.yihaodian.com:8020/user/lujian/lujian/_tez_session_dir/63de23a2-1cff-4434-96ad-1304089fb489/.tez/application_1433219182593_252390/tez-conf.pb
 
]], TaskAttempt 2 failed, info=[Container 
container_1433219182593_252390_01_05 finished with diagnostics set to 
[Container failed. File does not exist: 
hdfs://yhd-jqhadoop2.int.yihaodian.com:8020/user/lujian/lujian/_tez_session_dir/63de23a2-1cff-4434-96ad-1304089fb489/.tez/application_1433219182593_252390/tez-conf.pb
 
]], TaskAttempt 3 failed, info=[Container 
container_1433219182593_252390_01_06 finished with diagnostics set to 
[Container failed. File does not exist: 
hdfs://yhd-jqhadoop2.int.yihaodian.com:8020/user/lujian/lujian/_tez_session_dir/63de23a2-1cff-4434-96ad-1304089fb489/.tez/application_1433219182593_252390/tez-conf.pb
 
]]], Vertex failed as one or more tasks failed. failedTasks:1, Vertex 
vertex_1433219182593_252390_2_00 [Map 1] killed/failed due to:null] 
DAG failed due to vertex failure. failedVertices:1 killedVertices:0 
FAILED: Execution Error, return code 2 from 
org.apache.hadoop.hive.

Re: HBase and Hive integration

2015-06-22 Thread Buntu Dev
Thanks Sanjiv. I've updated the Hive config setting the
hbase.zookeeper.quorum to point to the appropriate zookeeper.



Re: HBase and Hive integration

2015-06-22 Thread Buntu Dev
Thanks Sanjiv.



Re: HBase and Hive integration

2015-06-22 Thread @Sanjiv Singh
Hi Buntu,


Hive config to provide the ZooKeeper quorum for the HBase cluster:


--hiveconf hbase.zookeeper.quorum=##
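
For example, a session-level sketch (the hostnames below are hypothetical,
not from this thread):

    -- Point Hive's HBase storage handler at a remote ZooKeeper ensemble:
    SET hbase.zookeeper.quorum=zk1.example.com,zk2.example.com,zk3.example.com;
    SET hbase.zookeeper.property.clientPort=2181;  -- default ZK client port

The same properties can also go in hive-site.xml if every session should use them.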


Regards
Sanjiv Singh
Mob :  +091 9990-447-339

On Fri, Jun 12, 2015 at 10:04 PM, Buntu Dev  wrote:

> Thanks Nick for the write up. It was quite helpful for a newbie like me.
>
> Is there any Hive config to provide the zookeeper quorum for the HBase
> cluster since I got Hive and HBase on separate clusters?
>
> Thanks!
>
> On Tue, Jun 9, 2015 at 12:03 AM, Nick Dimiduk  wrote:
>
>> Hi there.
>>
>> I go through a complete example in this pair of blog posts [0], [1].
>> Basically, create the table with the storage handler, without EXTERNAL, and
>> its lifecycle will be managed by Hive.
>>
>> [0]: http://www.n10k.com/blog/hbase-via-hive-pt1/
>> [1]: http://www.n10k.com/blog/hbase-via-hive-pt2/
>>
>> On Fri, Jun 5, 2015 at 10:56 AM, Sean Busbey  wrote:
>>
>>> +user@hive
>>> -user@hbase to bcc
>>>
>>> Hi!
>>>
>>> This question is better handled by the hive user list, so I've copied
>>> them in and moved the hbase user list to bcc.
>>>
>>> On Fri, Jun 5, 2015 at 12:54 PM, Buntu Dev  wrote:
>>>
 Hi -

 Newbie question: I got Hive and HBase on different clusters, and say all the
 appropriate ports are open to connect Hive to HBase; then how do I create a
 Hive-managed HBase table?

 Thanks!

>>>
>>>
>>>
>>> --
>>> Sean
>>>
>>
>>
>


Re: Left function

2015-06-22 Thread @Sanjiv Singh
Or you can write a UDF to provide a LEFT function in Hive.

See:
http://snowplowanalytics.com/blog/2013/02/08/writing-hive-udfs-and-serdes/
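
For reference, Hive's built-in substr can emulate LEFT directly (a minimal
sketch; see Nitin's substr suggestion below):

    -- Emulating MySQL's LEFT(string, length) with Hive's substr:
    SELECT substr('hive-rocks', 1, 4);           -- returns 'hive'
    SELECT substr(name, 1, 10) FROM some_table;  -- table/column names hypothetical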



Regards
Sanjiv Singh
Mob :  +091 9990-447-339

On Tue, Jun 16, 2015 at 3:13 PM, Nitin Pawar 
wrote:

> try using substr function
>
> On Tue, Jun 16, 2015 at 3:03 PM, Ravisankar Mani 
> wrote:
>
>> Hi every one,
>>
>>
>> How do I get the leftmost N characters of a string in Hive?
>>
>> MySQL and some other SQL dialects have a specific function for this:
>>
>> LEFT(string, length)
>>
>> Could you please suggest another way to achieve this?
>>
>>
>> Regards
>> Ravisankar
>>
>
>
>
> --
> Nitin Pawar
>


Re: hive cbo calciteplanner

2015-06-22 Thread @Sanjiv Singh
+ Dev

Regards
Sanjiv Singh
Mob :  +091 9990-447-339

On Wed, Jun 17, 2015 at 6:42 AM, wangzhenhua (G) 
wrote:

>  Hi all,
>
>  I'm reading the source code of Hive CBO (CalcitePlanner), but I find it
> hard to follow.
> Listed below are some of my questions:
> 1. What's the relationship between HepPlanner and HiveVolcanoPlanner?
> 2. I don't have a clue about these concepts: clusters, traitDef and
> collectGarbage().
>
>  Thanks for any help.
>
>  --
>  best regards,
> -zhenhua
>


Re: Hive counters for records read and written

2015-06-22 Thread @Sanjiv Singh
+ Dev



Regards
Sanjiv Singh
Mob :  +091 9990-447-339

On Wed, Jun 17, 2015 at 7:46 PM, Hemanth Meka 
wrote:

>  Hi,
>
>
>  I can see that two new counters have been added for hive
> (RECORDS_IN/RECORDS_OUT) in hive 0.14.
>
>
>  Prior to this release, which counters could be used to get the records
> read and written by a Hive job? I ask because in Hive 0.14, for a few Hive
> jobs, I see map_input_records but the map_output_records counter is 0, even
> though the job actually writes something to the output table and the Hive
> log also gives that count correctly.
>
>
>  In this case, how else can we get records read and records written in
> releases before 0.14?
>
>
>  Regards
>
> Hemanth
>


Re: Query Timeout

2015-06-22 Thread @Sanjiv Singh
Hi Ibrar,

What value did you give for

--hiveconf *hbase.master*=#

OR

--hiveconf *hbase.zookeeper.quorum*=


It seems from the error that the HBase server configuration is not correct.
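
For example, well-formed values look like this (the hosts and ports below are
purely illustrative, not from this thread):

    -- hbase.master expects a host:port pair; the quorum is a comma-separated host list:
    SET hbase.master=hbase-master.example.com:60000;
    SET hbase.zookeeper.quorum=zk1.example.com,zk2.example.com;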




Regards
Sanjiv Singh
Mob :  +091 9990-447-339

On Wed, Jun 17, 2015 at 4:31 PM, Ibrar Ahmed  wrote:

> I am able to fix that issue, but got another error
>
>
> [127.0.0.1:1] hive> CREATE TABLE IF NOT EXISTS pagecounts_hbase
> (rowkey STRING, pageviews STRING, bytes STRING) STORED BY
> 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES
> ('hbase.columns.mapping' = ':key,f:c1,f:c2') TBLPROPERTIES
> ('hbase.table.name' = 'pagecounts');
> [Hive Error]: Query returned non-zero code: 1, cause: FAILED: Execution
> Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
> MetaException(message:java.lang.IllegalArgumentException: Not a host:port
> pair: PBUF
> "
> ibrar-virtual-machine �� �߯��) ��
> at
> org.apache.hadoop.hbase.util.Addressing.parseHostname(Addressing.java:60)
> at org.apache.hadoop.hbase.ServerName.<init>(ServerName.java:96)
> at
> org.apache.hadoop.hbase.ServerName.parseVersionedServerName(ServerName.java:278)
> at
> org.apache.hadoop.hbase.MasterAddressTracker.bytesToServerName(MasterAddressTracker.java:77)
> at
> org.apache.hadoop.hbase.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:61)
> at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:631)
> at
> org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:106)
>
> at
> org.apache.hadoop.hive.hbase.HBaseStorageHandler.getHBaseAdmin(HBaseStorageHandler.java:84)
> at
> org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:162)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:554)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:547)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
> at com.sun.proxy.$Proxy7.createTable(Unknown Source)
> at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:613)
> at
> org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4194)
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:281)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
> at
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1472)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1239)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1057)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:880)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:870)
> at
> org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:198)
> at
> org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:644)
> at
> org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:628)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
>
>
> On Wed, Jun 17, 2015 at 3:51 PM, Ibrar Ahmed 
> wrote:
>
>> Hi,
>>
>> Whats wrong with my settings?
>>
>> [127.0.0.1:1] hive> CREATE TABLE IF NOT EXISTS pagecounts_hbase
>> (rowkey STRING, pageviews STRING, bytes STRING) STORED BY
>> 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES
>> ('hbase.columns.mapping' = ':key,f:c1,f:c2') TBLPROPERTIES
>> ('hbase.table.name' = 'pagecounts');
>>
>> [Hive Error]: Query returned non-zero code: 1, cause: FAILED: Execution
>> Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
>> MetaException(message:MetaException(message:org.apache.hadoop.hbase.MasterNotRunningException:
>> Retried 10 times
>> at
>> org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:127)
>> at
>> org.apache.hadoop.hive.hbase.HBaseStorageHandler.getHBaseAdmin(HBaseStorageHandler.java:84)
>> at
>> org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:162)
>> at
>> org.apache.h

Re: Re: delta file compact take no effect

2015-06-22 Thread r7raul1...@163.com
My hive version is 1.1.0



r7raul1...@163.com
 
From: Alan Gates
Date: 2015-06-18 23:25
To: user
Subject: Re: delta file compact take no effect
Which version of Hive are you running?  A number of deadlock issues were 
resolved in HIVE-10500 which was released in Hive 1.2.  Based on your log it 
appears it recovered properly from the deadlocks and did manage to compact.

Alan.

r7raul1...@163.com
June 17, 2015 at 18:09
It works~~   But I see some ERRORs and deadlocks.

2015-06-18 09:06:06,509 ERROR [test.oracle-22]: txn.CompactionTxnHandler 
(CompactionTxnHandler.java:findNextToCompact(194)) - Unable to select next 
element for compaction, ERROR: could not serialize access due to concurrent 
update 
2015-06-18 09:06:06,509 ERROR [test.oracle-27]: txn.CompactionTxnHandler 
(CompactionTxnHandler.java:findNextToCompact(194)) - Unable to select next 
element for compaction, ERROR: could not serialize access due to concurrent 
update 
2015-06-18 09:06:06,509 ERROR [test.oracle-28]: txn.CompactionTxnHandler 
(CompactionTxnHandler.java:findNextToCompact(194)) - Unable to select next 
element for compaction, ERROR: could not serialize access due to concurrent 
update 
2015-06-18 09:06:06,509 WARN [test.oracle-22]: txn.TxnHandler 
(TxnHandler.java:checkRetryable(916)) - Deadlock detected in findNextToCompact, 
trying again. 
2015-06-18 09:06:06,509 WARN [test.oracle-27]: txn.TxnHandler 
(TxnHandler.java:checkRetryable(916)) - Deadlock detected in findNextToCompact, 
trying again. 
2015-06-18 09:06:06,509 WARN [test.oracle-28]: txn.TxnHandler 
(TxnHandler.java:checkRetryable(916)) - Deadlock detected in findNextToCompact, 
trying again. 
2015-06-18 09:06:06,544 INFO [test.oracle-26]: compactor.Worker 
(Worker.java:run(140)) - Starting MAJOR compaction for default.u_data_txn 
2015-06-18 09:06:06,874 INFO [test.oracle-26]: impl.TimelineClientImpl 
(TimelineClientImpl.java:serviceInit(123)) - Timeline service address: 
http://192.168.117.117:8188/ws/v1/timeline/ 
2015-06-18 09:06:06,960 INFO [test.oracle-26]: client.RMProxy 
(RMProxy.java:createRMProxy(92)) - Connecting to ResourceManager at 
localhost/127.0.0.1:8032 
2015-06-18 09:06:07,175 INFO [test.oracle-26]: impl.TimelineClientImpl 
(TimelineClientImpl.java:serviceInit(123)) - Timeline service address: 
http://192.168.117.117:8188/ws/v1/timeline/ 
2015-06-18 09:06:07,176 INFO [test.oracle-26]: client.RMProxy 
(RMProxy.java:createRMProxy(92)) - Connecting to ResourceManager at 
localhost/127.0.0.1:8032 
2015-06-18 09:06:07,298 WARN [test.oracle-26]: mapreduce.JobSubmitter 
(JobSubmitter.java:copyAndConfigureFiles(150)) - Hadoop command-line option 
parsing not performed. Implement the Tool interface and execute your 
application with ToolRunner to remedy this. 
2015-06-18 09:06:07,777 INFO [test.oracle-26]: mapreduce.JobSubmitter 
(JobSubmitter.java:submitJobInternal(401)) - number of splits:2 
2015-06-18 09:06:07,876 INFO [test.oracle-26]: mapreduce.JobSubmitter 
(JobSubmitter.java:printTokens(484)) - Submitting tokens for job: 
job_1433398549746_0035 
2015-06-18 09:06:08,021 INFO [test.oracle-26]: impl.YarnClientImpl 
(YarnClientImpl.java:submitApplication(236)) - Submitted application 
application_1433398549746_0035 
2015-06-18 09:06:08,052 INFO [test.oracle-26]: mapreduce.Job 
(Job.java:submit(1299)) - The url to track the job: 
http://localhost:8088/proxy/application_1433398549746_0035/ 
2015-06-18 09:06:08,052 INFO [test.oracle-26]: mapreduce.Job 
(Job.java:monitorAndPrintJob(1344)) - Running job: job_1433398549746_0035 
2015-06-18 09:06:18,174 INFO [test.oracle-26]: mapreduce.Job 
(Job.java:monitorAndPrintJob(1365)) - Job job_1433398549746_0035 running in 
uber mode : false 
2015-06-18 09:06:18,176 INFO [test.oracle-26]: mapreduce.Job 
(Job.java:monitorAndPrintJob(1372)) - map 0% reduce 0% 
2015-06-18 09:06:23,232 INFO [test.oracle-26]: mapreduce.Job 
(Job.java:monitorAndPrintJob(1372)) - map 50% reduce 0% 
2015-06-18 09:06:28,262 INFO [test.oracle-26]: mapreduce.Job 
(Job.java:monitorAndPrintJob(1372)) - map 100% reduce 0% 
2015-06-18 09:06:28,273 INFO [test.oracle-26]: mapreduce.Job 
(Job.java:monitorAndPrintJob(1383)) - Job job_1433398549746_0035 completed 
successfully 
2015-06-18 09:06:28,327 INFO [test.oracle-26]: mapreduce.Job 
(Job.java:monitorAndPrintJob(1390)) - Counters: 30 



r7raul1...@163.com
r7raul1...@163.com
June 10, 2015 at 22:10

I use hive 1.1.0 on hadoop 2.5.0.
After I do some update operations on table u_data_txn,
the table has many delta files like:
drwxr-xr-x - hdfs hive 0 2015-02-06 22:52 
/user/hive/warehouse/u_data_txn/delta_001_001 
-rw-r--r-- 3 hdfs supergroup 346453 2015-02-06 22:52 
/user/hive/warehouse/u_data_txn/delta_001_001/bucket_0 
-rw-r--r-- 3 hdfs supergroup 415924 2015-02-06 22:52 
/user/hive/warehouse/u_data_txn/delta_001_001/bucket_1 
drwxr-xr-x - hdfs hive 0 2015-02-06 22:58 
/user/hive/warehouse/u_data_txn/delta_002_002 
-rw-r--r-- 3 hdfs supergroup 807 2015-02-06 22:58 
/
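
For reference, a compaction can also be requested manually; a minimal sketch
against the table from this thread:

    -- Request a major compaction of the ACID table and watch its progress:
    ALTER TABLE u_data_txn COMPACT 'major';
    SHOW COMPACTIONS;   -- lists queued/working/done compaction requests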

RE: Updating hive metadata

2015-06-22 Thread Chagarlamudi, Prasanth
Thank you Dev and Ryan.

Repair table worked. One last question: are there any known side effects of
repair table?

Thanks
Prasanth Chagarlamudi

From: Ryan Harris [mailto:ryan.har...@zionsbancorp.com]
Sent: Thursday, June 18, 2015 11:54 PM
To: user@hive.apache.org
Subject: RE: Updating hive metadata

you *should* be able to do:

create table my_table_2 like my_table;
dfs -cp /user/hive/warehouse/my_table/* /user/hive/warehouse/my_table_2/
MSCK repair table my_table_2;
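
If the table is partitioned, the MSCK repair step is what re-registers the
copied partitions in the metastore; either way, a quick row-count comparison
confirms the copy (a sketch using the names above):

    -- Verify the clone sees the copied data:
    SELECT COUNT(*) FROM my_table;      -- source
    SELECT COUNT(*) FROM my_table_2;    -- clone; counts should match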



From: Devopam Mittra 
[mailto:devo...@gmail.com]
Sent: Thursday, June 18, 2015 10:12 PM
To: user@hive.apache.org
Subject: Re: Updating hive metadata

hi Prasanth,
I would not suggest tweaking hive metastore info unless you know exactly which
tables will be impacted by such a change. Manual tweaks like this also tend to
break across upgrades, since they are hard to manage by hand.

Why don't you create my_managed_table_2 as an EXTERNAL table and point it at
the copied data in the HDFS layer?
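
A minimal sketch of that approach (the column list is hypothetical; the thread
does not give the schema):

    -- Point an external table at the already-copied directory:
    CREATE EXTERNAL TABLE my_managed_table_2 (id BIGINT, name STRING)
    LOCATION '/user/hive/warehouse/mydb.db/my_managed_table_2';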

regards
Dev

On Thu, Jun 18, 2015 at 11:40 PM, Chagarlamudi, Prasanth
<prasanth.chagarlam...@epsilon.com> wrote:
Hello,
Is there a way to update metadata in hive?

Created database mydb;
Created my_point_table;
Created my_managed_table;
Insert into my_managed_table from my_point_table;

Now,
Create my_point_table_2;
//Copy data from hive managed_table to managed_table_2’s location
hdfs dfs -cp /user/hive/warehouse/mydb.db/my_managed_table 
/user/hive/warehouse/mydb.db/my_managed_table_2
At this point, I am expecting the following query:
Select * from my_managed_table_2;
to give me all the data I just copied from my_managed_table.

How do I update the hive metastore to consider the data that I copied to 
my_managed_table_2? Is that even possible?

Thanks in advance
Prasanth Chagarlamudi








--
Devopam Mittra
Life and Relations are not binary
