jcamachor commented on a change in pull request #1151:
URL: https://github.com/apache/hive/pull/1151#discussion_r443269785



##########
File path: common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
##########
@@ -4979,10 +4979,11 @@ private static void populateLlapDaemonVarsSet(Set<String> llapDaemonVarsSetLocal
 
     HIVE_QUERY_REEXECUTION_ENABLED("hive.query.reexecution.enabled", true,
         "Enable query reexecutions"),
-    HIVE_QUERY_REEXECUTION_STRATEGIES("hive.query.reexecution.strategies", "overlay,reoptimize",
+    HIVE_QUERY_REEXECUTION_STRATEGIES("hive.query.reexecution.strategies", "overlay,reoptimize,retrylock",

Review comment:
       Do we really want `retrylock` to be driven by a config? Shouldn't it 
just be enabled?
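To make the concern concrete, here is a minimal sketch (illustrative names only, not Hive's actual API) of why a config-driven strategy list can be fragile: if `retrylock` is only active when listed in `hive.query.reexecution.strategies`, a user who overrides that config silently disables it.

```java
import java.util.Arrays;
import java.util.List;

public class StrategyConfigSketch {
    // Hypothetical helper mirroring how a comma-separated strategies
    // config would be split into active plugin names.
    static List<String> activeStrategies(String configValue) {
        return Arrays.asList(configValue.split(","));
    }

    public static void main(String[] args) {
        // With the patched default, retrylock is active.
        System.out.println(activeStrategies("overlay,reoptimize,retrylock").contains("retrylock")); // true
        // A user who overrides the config loses it without noticing.
        System.out.println(activeStrategies("overlay,reoptimize").contains("retrylock")); // false
    }
}
```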

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/Driver.java
##########
@@ -675,50 +678,18 @@ private void runInternal(String command, boolean alreadyCompiled) throws Command
 
       try {
         if (!validTxnManager.isValidTxnListState()) {
-          LOG.info("Compiling after acquiring locks");
+          LOG.info("Reexecuting after acquiring locks, since snapshot was outdated.");
           // Snapshot was outdated when locks were acquired, hence regenerate context,
-          // txn list and retry
-          // TODO: Lock acquisition should be moved before analyze, this is a bit hackish.
-          // Currently, we acquire a snapshot, we compile the query wrt that snapshot,
-          // and then, we acquire locks. If snapshot is still valid, we continue as usual.
-          // But if snapshot is not valid, we recompile the query.
-          if (driverContext.isOutdatedTxn()) {
-            driverContext.getTxnManager().rollbackTxn();
-
-            String userFromUGI = DriverUtils.getUserFromUGI(driverContext);
-            driverContext.getTxnManager().openTxn(context, userFromUGI, driverContext.getTxnType());
-            lockAndRespond();
-          }
-          driverContext.setRetrial(true);
-          driverContext.getBackupContext().addSubContext(context);
-          driverContext.getBackupContext().setHiveLocks(context.getHiveLocks());
-          context = driverContext.getBackupContext();
-          driverContext.getConf().set(ValidTxnList.VALID_TXNS_KEY,
-            driverContext.getTxnManager().getValidTxns().toString());
-          if (driverContext.getPlan().hasAcidResourcesInQuery()) {
-            validTxnManager.recordValidWriteIds();
-          }
-
-          if (!alreadyCompiled) {
-            // compile internal will automatically reset the perf logger
-            compileInternal(command, true);
-          } else {
-            // Since we're reusing the compiled plan, we need to update its start time for current run
-            driverContext.getPlan().setQueryStartTime(driverContext.getQueryDisplay().getQueryStartTime());
-          }
-
-          if (!validTxnManager.isValidTxnListState()) {
-            // Throw exception
-            throw handleHiveException(new HiveException("Operation could not be executed"), 14);
+          // txn list and retry (see ReExecutionRetryLockPlugin)
+          try {
+            releaseLocksAndCommitOrRollback(false);
+          } catch (LockException e) {
+            handleHiveException(e, 12);
           }
-
-          //Reset the PerfLogger
-          perfLogger = SessionState.getPerfLogger(true);
-
-          // the reason that we set the txn manager for the cxt here is because each
-          // query has its own ctx object. The txn mgr is shared across the
-          // same instance of Driver, which can run multiple queries.
-          context.setHiveTxnManager(driverContext.getTxnManager());
+          throw handleHiveException(

Review comment:
       I think this makes the behavior slightly different wrt the original logic. For instance, if another transaction obtains the locks between the moment this transaction releases them and the moment it tries to acquire them again, does that mean the transaction would fail a second time? If so, should we have a specific configuration for the number of retries in this case? It seems that for the default re-execution the number of retries is `1`, but in this case we could retry several times before failing the query.
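The suggestion above can be sketched as a bounded retry loop with a dedicated budget. This is a minimal illustration, not Hive's actual code: `MAX_LOCK_RETRIES` stands in for a hypothetical HiveConf setting, and `snapshotStillValid` stands in for the real release-locks / refresh-snapshot / recompile / re-acquire cycle.

```java
public class RetryLockSketch {
    // Hypothetical knob; in the suggestion it would come from a HiveConf setting.
    static final int MAX_LOCK_RETRIES = 3;

    // Placeholder for the real cycle: here it pretends the snapshot only
    // stays valid through lock acquisition on the given attempt number.
    static boolean snapshotStillValid(int attempt, int succeedsOn) {
        return attempt == succeedsOn;
    }

    static boolean runWithRetries(int succeedsOn) {
        for (int attempt = 1; attempt <= MAX_LOCK_RETRIES; attempt++) {
            if (snapshotStillValid(attempt, succeedsOn)) {
                return true; // locks acquired while snapshot was still valid
            }
        }
        return false; // budget exhausted: fail the query
    }

    public static void main(String[] args) {
        System.out.println(runWithRetries(3)); // succeeds on the last allowed attempt
        System.out.println(runWithRetries(4)); // exceeds the budget and fails
    }
}
```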

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/DriverFactory.java
##########
@@ -64,6 +65,9 @@ private static IReExecutionPlugin buildReExecPlugin(String name) throws RuntimeE
     if (name.equals("reoptimize")) {
       return new ReOptimizePlugin();
     }
+    if (name.endsWith("retrylock")) {

Review comment:
       `equals` ?
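A two-line illustration of why `endsWith` is too loose here: any plugin name that merely ends with "retrylock" would also match, whereas `equals` accepts only the exact name.

```java
public class PluginNameCheck {
    public static void main(String[] args) {
        // "noretrylock" is a hypothetical mistyped plugin name used for illustration.
        System.out.println("noretrylock".endsWith("retrylock")); // true: false positive
        System.out.println("noretrylock".equals("retrylock"));   // false: exact match rejects it
    }
}
```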




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


