[ 
https://issues.apache.org/jira/browse/HIVE-22977?focusedWorklogId=841208&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-841208
 ]

ASF GitHub Bot logged work on HIVE-22977:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 23/Jan/23 18:32
            Start Date: 23/Jan/23 18:32
    Worklog Time Spent: 10m 
      Work Description: SourabhBadhya commented on code in PR #3801:
URL: https://github.com/apache/hive/pull/3801#discussion_r1084407044


##########
ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/MergeCompactor.java:
##########
@@ -0,0 +1,210 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.ql.txn.compactor;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.ValidWriteIdList;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.api.Partition;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.metastore.api.Table;
+import org.apache.hadoop.hive.metastore.txn.CompactionInfo;
+import org.apache.hadoop.hive.ql.io.AcidDirectory;
+import org.apache.hadoop.hive.ql.io.AcidOutputFormat;
+import org.apache.hadoop.hive.ql.io.AcidUtils;
+import org.apache.hadoop.hive.ql.io.orc.OrcFile;
+import org.apache.hadoop.hive.ql.io.orc.Reader;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.regex.Matcher;
+
+final class MergeCompactor extends QueryCompactor {
+
+  private static final Logger LOG = LoggerFactory.getLogger(MergeCompactor.class.getName());
+
+  @Override
+  public boolean run(HiveConf hiveConf, Table table, Partition partition, StorageDescriptor storageDescriptor,
+                  ValidWriteIdList writeIds, CompactionInfo compactionInfo, AcidDirectory dir) throws IOException, HiveException, InterruptedException {
+    if (isMergeCompaction(hiveConf, dir, writeIds, storageDescriptor)) {
+      // Only inserts happened; it is much more performant to merge the files than to run a query.
+      Path outputDirPath = getCompactionOutputDirPath(hiveConf, writeIds,
+              compactionInfo.isMajorCompaction(), storageDescriptor);
+      try {
+        return mergeOrcFiles(hiveConf, compactionInfo.isMajorCompaction(),
+                dir, outputDirPath, AcidUtils.isInsertOnlyTable(table.getParameters()));
+      } catch (Throwable t) {
+        // On error, just delete the output directory
+        // and fall back to query-based compaction.
+        FileSystem fs = outputDirPath.getFileSystem(hiveConf);
+        if (fs.exists(outputDirPath)) {
+          fs.delete(outputDirPath, true);
+        }
+        return false;
+      }
+    } else {
+      return false;
+    }
+  }
+
+  /**
+   * Returns whether merge compaction should be enabled.
+   * @param conf Hive configuration
+   * @param directory the directory to be scanned
+   * @param validWriteIdList list of valid write IDs
+   * @param storageDescriptor storage descriptor of the underlying table
+   * @return true if merge compaction should be enabled
+   */
+  private boolean isMergeCompaction(HiveConf conf, AcidDirectory directory,
+                                   ValidWriteIdList validWriteIdList,
+                                   StorageDescriptor storageDescriptor) {
+    return conf.getBoolVar(HiveConf.ConfVars.HIVE_MERGE_COMPACTION_ENABLED)

Review Comment:
   Done.





Issue Time Tracking
-------------------

    Worklog Id:     (was: 841208)
    Time Spent: 7h 10m  (was: 7h)

> Merge delta files instead of running a query in major/minor compaction
> ----------------------------------------------------------------------
>
>                 Key: HIVE-22977
>                 URL: https://issues.apache.org/jira/browse/HIVE-22977
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: László Pintér
>            Assignee: Sourabh Badhya
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-22977.01.patch, HIVE-22977.02.patch
>
>          Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> [Compaction Optimization]
> We should analyse the possibility of moving a delta file instead of running a 
> major/minor compaction query.
> Please consider the following use cases:
>  - Full ACID table, but only insert queries were run. This means that no 
> delete delta directories were created. Is it possible to merge the delta 
> directory contents without running a compaction query?
>  - Full ACID table, ingesting data through the streaming API. If there 
> are no aborted transactions during streaming, is it possible to merge the 
> delta directory contents without running a compaction query?
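
For illustration only (not part of the patch on PR #3801), here is a minimal, self-contained sketch of the eligibility idea described above: treat a directory as mergeable only if it contains insert deltas ("delta_*") and no delete deltas ("delete_delta_*"). The class name MergeEligibilityCheck and the simplified rule are hypothetical; the real compactor would additionally consult the ValidWriteIdList and aborted/open write IDs before merging anything.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.IOException;

public class MergeEligibilityCheck {

  // True if the directory holds only insert deltas: at least one "delta_*"
  // subdirectory and no "delete_delta_*" subdirectory.
  static boolean onlyInsertDeltas(FileSystem fs, Path dir) throws IOException {
    boolean sawInsertDelta = false;
    for (FileStatus status : fs.listStatus(dir)) {
      if (!status.isDirectory()) {
        continue;
      }
      String name = status.getPath().getName();
      if (name.startsWith("delete_delta_")) {
        return false;          // deletes present: fall back to query-based compaction
      }
      if (name.startsWith("delta_")) {
        sawInsertDelta = true; // insert delta found
      }
    }
    return sawInsertDelta;
  }

  public static void main(String[] args) throws IOException {
    Path dir = new Path(args[0]); // table or partition directory
    FileSystem fs = dir.getFileSystem(new Configuration());
    System.out.println("Eligible for file merge: " + onlyInsertDeltas(fs, dir));
  }
}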



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
