keith-turner commented on code in PR #5726:
URL: https://github.com/apache/accumulo/pull/5726#discussion_r2198421634


##########
server/base/src/main/java/org/apache/accumulo/server/compaction/FileCompactor.java:
##########
@@ -560,9 +582,15 @@ private void compactLocalityGroup(String lgName, Set<ByteSequence> columnFamilie
       SystemIteratorEnvironment iterEnv =
           env.createIteratorEnv(context, acuTableConf, getExtent().tableId());
 
-      SortedKeyValueIterator<Key,Value> itr = iterEnv.getTopLevelIterator(IteratorConfigUtil
-          .convertItersAndLoad(env.getIteratorScope(), cfsi, acuTableConf, iterators, iterEnv));
+      SortedKeyValueIterator<Key,Value> stack = null;
+      try {
+        stack = IteratorConfigUtil.convertItersAndLoad(env.getIteratorScope(), cfsi, acuTableConf,

Review Comment:
   Wondering if this fix is too narrow.  Maybe we want to do something more general like the following.
   
    * Have a configurable consecutive compaction failure count that causes process death
    * Do exponential backoff between failed compactions.
   
   This would more gracefully deal with consistently failing compactions that happen for any reason.  For example, if a compactor fails to compact 10 times in a row after backing off between each attempt, just exit the process.
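
   A minimal sketch of the kind of failure tracking being suggested, assuming a hypothetical helper class with made-up property names and defaults (nothing below is an existing Accumulo API; a real change would hook into Accumulo's configuration, logging, and halt utilities):
   
   ```java
   import java.util.concurrent.TimeUnit;
   
   /**
    * Hypothetical sketch only: count consecutive compaction failures, back off
    * exponentially between failed attempts, and terminate the process once a
    * configurable threshold is reached.
    */
   public class CompactionFailureTracker {
   
     private final int maxConsecutiveFailures; // e.g. a new compactor property, default 10
     private final long initialBackoffMillis;  // e.g. 1000
     private final long maxBackoffMillis;      // e.g. 5 * 60 * 1000
   
     private int consecutiveFailures = 0;
   
     public CompactionFailureTracker(int maxConsecutiveFailures, long initialBackoffMillis,
         long maxBackoffMillis) {
       this.maxConsecutiveFailures = maxConsecutiveFailures;
       this.initialBackoffMillis = initialBackoffMillis;
       this.maxBackoffMillis = maxBackoffMillis;
     }
   
     /** Call after a compaction completes successfully; resets the failure streak. */
     public synchronized void recordSuccess() {
       consecutiveFailures = 0;
     }
   
     /**
      * Call after a compaction fails. Exits the process once the failure streak hits
      * the threshold, otherwise sleeps for an exponentially growing, capped interval.
      */
     public synchronized void recordFailure() throws InterruptedException {
       consecutiveFailures++;
       if (consecutiveFailures >= maxConsecutiveFailures) {
         System.err.println(
             "Compaction failed " + consecutiveFailures + " times in a row, exiting process");
         System.exit(1);
       }
       // double the backoff for each consecutive failure, capped at maxBackoffMillis
       long backoff = Math.min(maxBackoffMillis,
           initialBackoffMillis << Math.min(consecutiveFailures - 1, 20));
       TimeUnit.MILLISECONDS.sleep(backoff);
     }
   }
   ```
   
   The compactor's main loop would then call `recordSuccess()` / `recordFailure()` around each compaction attempt, so any kind of repeated failure, not just the iterator class-loading case fixed here, eventually backs off and kills the process.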


