Which version of Hive are you running? A number of deadlock issues were resolved by HIVE-10500, which was released in Hive 1.2. Based on your log, it appears Hive recovered properly from the deadlocks and did manage to compact.
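For anyone hitting the same symptom: the state of the compaction queue can be inspected from the Hive CLI. This is a minimal check, assuming the SHOW COMPACTIONS statement available since Hive 0.13:

```sql
-- Lists queued/running/finished compaction requests; the State column
-- shows values such as "initiated", "working", and "ready for cleaning".
SHOW COMPACTIONS;
```

A request stuck in "initiated" usually means no Worker thread picked it up, which points at the metastore-side compactor configuration rather than the table itself.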

Alan.

r7raul1...@163.com
June 17, 2015 at 18:09
It works! But I see some ERRORs and deadlocks:

2015-06-18 09:06:06,509 ERROR [test.oracle-22]: txn.CompactionTxnHandler (CompactionTxnHandler.java:findNextToCompact(194)) - Unable to select next element for compaction, ERROR: could not serialize access due to concurrent update
2015-06-18 09:06:06,509 ERROR [test.oracle-27]: txn.CompactionTxnHandler (CompactionTxnHandler.java:findNextToCompact(194)) - Unable to select next element for compaction, ERROR: could not serialize access due to concurrent update
2015-06-18 09:06:06,509 ERROR [test.oracle-28]: txn.CompactionTxnHandler (CompactionTxnHandler.java:findNextToCompact(194)) - Unable to select next element for compaction, ERROR: could not serialize access due to concurrent update
2015-06-18 09:06:06,509 WARN [test.oracle-22]: txn.TxnHandler (TxnHandler.java:checkRetryable(916)) - Deadlock detected in findNextToCompact, trying again.
2015-06-18 09:06:06,509 WARN [test.oracle-27]: txn.TxnHandler (TxnHandler.java:checkRetryable(916)) - Deadlock detected in findNextToCompact, trying again.
2015-06-18 09:06:06,509 WARN [test.oracle-28]: txn.TxnHandler (TxnHandler.java:checkRetryable(916)) - Deadlock detected in findNextToCompact, trying again.
2015-06-18 09:06:06,544 INFO [test.oracle-26]: compactor.Worker (Worker.java:run(140)) - Starting MAJOR compaction for default.u_data_txn
2015-06-18 09:06:06,874 INFO [test.oracle-26]: impl.TimelineClientImpl (TimelineClientImpl.java:serviceInit(123)) - Timeline service address: http://192.168.117.117:8188/ws/v1/timeline/
2015-06-18 09:06:06,960 INFO [test.oracle-26]: client.RMProxy (RMProxy.java:createRMProxy(92)) - Connecting to ResourceManager at localhost/127.0.0.1:8032
2015-06-18 09:06:07,175 INFO [test.oracle-26]: impl.TimelineClientImpl (TimelineClientImpl.java:serviceInit(123)) - Timeline service address: http://192.168.117.117:8188/ws/v1/timeline/
2015-06-18 09:06:07,176 INFO [test.oracle-26]: client.RMProxy (RMProxy.java:createRMProxy(92)) - Connecting to ResourceManager at localhost/127.0.0.1:8032
2015-06-18 09:06:07,298 WARN [test.oracle-26]: mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(150)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2015-06-18 09:06:07,777 INFO [test.oracle-26]: mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(401)) - number of splits:2
2015-06-18 09:06:07,876 INFO [test.oracle-26]: mapreduce.JobSubmitter (JobSubmitter.java:printTokens(484)) - Submitting tokens for job: job_1433398549746_0035
2015-06-18 09:06:08,021 INFO [test.oracle-26]: impl.YarnClientImpl (YarnClientImpl.java:submitApplication(236)) - Submitted application application_1433398549746_0035
2015-06-18 09:06:08,052 INFO [test.oracle-26]: mapreduce.Job (Job.java:submit(1299)) - The url to track the job: http://localhost:8088/proxy/application_1433398549746_0035/
2015-06-18 09:06:08,052 INFO [test.oracle-26]: mapreduce.Job (Job.java:monitorAndPrintJob(1344)) - Running job: job_1433398549746_0035
2015-06-18 09:06:18,174 INFO [test.oracle-26]: mapreduce.Job (Job.java:monitorAndPrintJob(1365)) - Job job_1433398549746_0035 running in uber mode : false
2015-06-18 09:06:18,176 INFO [test.oracle-26]: mapreduce.Job (Job.java:monitorAndPrintJob(1372)) - map 0% reduce 0%
2015-06-18 09:06:23,232 INFO [test.oracle-26]: mapreduce.Job (Job.java:monitorAndPrintJob(1372)) - map 50% reduce 0%
2015-06-18 09:06:28,262 INFO [test.oracle-26]: mapreduce.Job (Job.java:monitorAndPrintJob(1372)) - map 100% reduce 0%
2015-06-18 09:06:28,273 INFO [test.oracle-26]: mapreduce.Job (Job.java:monitorAndPrintJob(1383)) - Job job_1433398549746_0035 completed successfully
2015-06-18 09:06:28,327 INFO [test.oracle-26]: mapreduce.Job (Job.java:monitorAndPrintJob(1390)) - Counters: 30

------------------------------------------------------------------------
r7raul1...@163.com
June 10, 2015 at 22:10

I use Hive 1.1.0 on Hadoop 2.5.0.
After I do some update operations on table u_data_txn,
the table accumulates many delta files, like:
drwxr-xr-x - hdfs hive 0 2015-02-06 22:52 /user/hive/warehouse/u_data_txn/delta_0000001_0000001
-rw-r--r-- 3 hdfs supergroup 346453 2015-02-06 22:52 /user/hive/warehouse/u_data_txn/delta_0000001_0000001/bucket_00000
-rw-r--r-- 3 hdfs supergroup 415924 2015-02-06 22:52 /user/hive/warehouse/u_data_txn/delta_0000001_0000001/bucket_00001
drwxr-xr-x - hdfs hive 0 2015-02-06 22:58 /user/hive/warehouse/u_data_txn/delta_0000002_0000002
-rw-r--r-- 3 hdfs supergroup 807 2015-02-06 22:58 /user/hive/warehouse/u_data_txn/delta_0000002_0000002/bucket_00000
-rw-r--r-- 3 hdfs supergroup 779 2015-02-06 22:58 /user/hive/warehouse/u_data_txn/delta_0000002_0000002/bucket_00001
drwxr-xr-x - hdfs hive 0 2015-02-06 22:59 /user/hive/warehouse/u_data_txn/delta_0000003_0000003
-rw-r--r-- 3 hdfs supergroup 817 2015-02-06 22:59 /user/hive/warehouse/u_data_txn/delta_0000003_0000003/bucket_00000
-rw-r--r-- 3 hdfs supergroup 767 2015-02-06 22:59 /user/hive/warehouse/u_data_txn/delta_0000003_0000003/bucket_00001
drwxr-xr-x - hdfs hive 0 2015-02-06 23:01 /user/hive/warehouse/u_data_txn/delta_0000004_0000004
-rw-r--r-- 3 hdfs supergroup 817 2015-02-06 23:01 /user/hive/warehouse/u_data_txn/delta_0000004_0000004/bucket_00000
-rw-r--r-- 3 hdfs supergroup 779 2015-02-06 23:01 /user/hive/warehouse/u_data_txn/delta_0000004_0000004/bucket_00001
drwxr-xr-x - hdfs hive 0 2015-02-06 23:03 /user/hive/warehouse/u_data_txn/delta_0000005_0000005
-rw-r--r-- 3 hdfs supergroup 817 2015-02-06 23:03 /user/hive/warehouse/u_data_txn/delta_0000005_0000005/bucket_00000
-rw-r--r-- 3 hdfs supergroup 779 2015-02-06 23:03 /user/hive/warehouse/u_data_txn/delta_0000005_0000005/bucket_00001
drwxr-xr-x - hdfs hive 0 2015-02-10 21:34 /user/hive/warehouse/u_data_txn/delta_0000006_0000006
-rw-r--r-- 3 hdfs supergroup 821 2015-02-10 21:34 /user/hive/warehouse/u_data_txn/delta_0000006_0000006/bucket_00000
drwxr-xr-x - hdfs hive 0 2015-02-10 21:35 /user/hive/warehouse/u_data_txn/delta_0000007_0000007
-rw-r--r-- 3 hdfs supergroup 821 2015-02-10 21:35 /user/hive/warehouse/u_data_txn/delta_0000007_0000007/bucket_00000
drwxr-xr-x - hdfs hive 0 2015-03-24 01:16 /user/hive/warehouse/u_data_txn/delta_0000008_0000008
-rw-r--r-- 3 hdfs supergroup 1670 2015-03-24 01:16 /user/hive/warehouse/u_data_txn/delta_0000008_0000008/bucket_00000
-rw-r--r-- 3 hdfs supergroup 1767 2015-03-24 01:16 /user/hive/warehouse/u_data_txn/delta_0000008_0000008/bucket_00001

I tried ALTER TABLE u_data_txn COMPACT 'MAJOR';
The delta files still exist.
Then I tried ALTER TABLE u_data_txn COMPACT 'MINOR';
The delta files still exist.
How do I merge the delta files?
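A point that often causes this confusion: compaction in Hive ACID is asynchronous. The ALTER TABLE ... COMPACT statement only enqueues a request; a Worker thread in the metastore later runs the actual MapReduce job, and the old delta directories are removed afterwards by the Cleaner once no open readers reference them. A way to watch the request, assuming Hive 0.13+ syntax, is:

```sql
-- Enqueue a major compaction request (returns immediately; nothing
-- is merged at this point).
ALTER TABLE u_data_txn COMPACT 'major';

-- Poll the queue: the request should progress from "initiated" to
-- "working", and the deltas disappear only after the Cleaner runs.
SHOW COMPACTIONS;
```

If the request never leaves "initiated", the usual suspects are hive.compactor.initiator.on and hive.compactor.worker.threads not being set in the *metastore's* hive-site.xml (setting them only on the client has no effect).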
My config is:
<property>
<name>hive.support.concurrency</name>
<value>true</value>
</property>
<property>
<name>hive.enforce.bucketing</name>
<value>true</value>
</property>
<property>
<name>hive.exec.dynamic.partition.mode</name>
<value>nonstrict</value>
</property>
<property>
<name>hive.txn.manager</name>
<value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
<name>hive.compactor.initiator.on</name>
<value>true</value>
</property>
<property>
<name>hive.compactor.worker.threads</name>
<value>4</value>
</property>
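For completeness: with hive.compactor.initiator.on set to true, the metastore can also trigger compactions automatically once enough deltas accumulate. The thresholds are tunable; the values below are the usual defaults, but worth double-checking against your Hive version's documentation:

```xml
<!-- Number of delta directories in a table/partition that triggers
     an automatic minor compaction (default 10). -->
<property>
  <name>hive.compactor.delta.num.threshold</name>
  <value>10</value>
</property>
<!-- Ratio of total delta size to base size that triggers an automatic
     major compaction (default 0.1, i.e. 10%). -->
<property>
  <name>hive.compactor.delta.pct.threshold</name>
  <value>0.1</value>
</property>
```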
------------------------------------------------------------------------