[ https://issues.apache.org/jira/browse/HADOOP-19569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17955180#comment-17955180 ]

ASF GitHub Bot commented on HADOOP-19569:
-----------------------------------------

Copilot commented on code in PR #7700:
URL: https://github.com/apache/hadoop/pull/7700#discussion_r2115659154


##########
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestBlockingThreadPoolExecutorService.java:
##########
@@ -16,71 +16,77 @@
  * limitations under the License.
  */
 
-package org.apache.hadoop.fs.s3a;
-
-import org.apache.hadoop.util.BlockingThreadPoolExecutorService;
-import org.apache.hadoop.util.SemaphoredDelegatingExecutor;
-import org.apache.hadoop.util.StopWatch;
-
-import org.junit.AfterClass;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.rules.Timeout;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
+package org.apache.hadoop.util;
 
 import java.util.concurrent.Callable;
 import java.util.concurrent.CountDownLatch;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Future;
+import java.util.concurrent.RejectedExecutionException;
 import java.util.concurrent.TimeUnit;
 
-import static org.junit.Assert.assertEquals;
+import org.assertj.core.api.Assertions;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
 
 /**
- * Basic test for S3A's blocking executor service.
+ * Test for the blocking executor service.
  */
-public class ITestBlockingThreadPoolExecutorService {
+public class TestBlockingThreadPoolExecutorService extends AbstractHadoopTestBase {
 
   private static final Logger LOG = LoggerFactory.getLogger(
-      ITestBlockingThreadPoolExecutorService.class);
+      TestBlockingThreadPoolExecutorService.class);
 
   private static final int NUM_ACTIVE_TASKS = 4;
+
   private static final int NUM_WAITING_TASKS = 2;
+
   private static final int TASK_SLEEP_MSEC = 100;
+
   private static final int SHUTDOWN_WAIT_MSEC = 200;
+
   private static final int SHUTDOWN_WAIT_TRIES = 5;
+
   private static final int BLOCKING_THRESHOLD_MSEC = 50;
 
   private static final Integer SOME_VALUE = 1337;
 
-  private static BlockingThreadPoolExecutorService tpe;
+  private BlockingThreadPoolExecutorService tpe;

Review Comment:
   [nitpick] Since the thread pool executor is now an instance variable with 
setup/teardown methods, ensure that each test properly initializes and destroys 
the executor to avoid interference between tests.
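
   A minimal sketch of the lifecycle the comment asks for, assuming the
   existing newInstance(activeTasks, waitingTasks, keepAliveTime, unit,
   namePrefix) factory and the constants declared in this test; the PR may
   wire this up differently:

{code}
  @Before
  public void setUp() {
    // fresh executor per test so no state leaks between cases
    tpe = BlockingThreadPoolExecutorService.newInstance(
        NUM_ACTIVE_TASKS, NUM_WAITING_TASKS,
        1, TimeUnit.SECONDS, "test-blocking-pool");
  }

  @After
  public void tearDown() throws Exception {
    // always stop the executor, even when a test failed part-way through
    if (tpe != null) {
      tpe.shutdownNow();
      tpe.awaitTermination(SHUTDOWN_WAIT_TRIES * SHUTDOWN_WAIT_MSEC,
          TimeUnit.MILLISECONDS);
      tpe = null;
    }
  }
{code}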



##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/BlockingThreadPoolExecutorService.java:
##########
@@ -130,21 +130,20 @@ public static BlockingThreadPoolExecutorService newInstance(
     slower than enqueueing. */
     final BlockingQueue<Runnable> workQueue =
         new LinkedBlockingQueue<>(waitingTasks + activeTasks);
+    final InnerExecutorRejection rejection = new InnerExecutorRejection();

Review Comment:
   [nitpick] The InnerExecutorRejection handler now shuts down the service upon 
rejection. Consider enhancing the error handling logic or adding more detailed 
documentation to explain the shutdown behavior in case of task rejection.
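
   For illustration only (this is not the PR's actual InnerExecutorRejection
   code), a handler of this shape makes the suggested behaviour explicit: log
   the rejection, shut the pool down so callers fail fast instead of blocking
   on a queue that will never drain, and rethrow:

{code}
  /**
   * Hypothetical sketch of a documented rejection policy; assumes the
   * enclosing class's LOG field and java.util.concurrent imports.
   */
  private static final class ShutdownOnRejection
      implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable task, ThreadPoolExecutor executor) {
      LOG.warn("Task {} rejected by {}; shutting the executor down", task, executor);
      // Stop accepting further work so callers get a fast
      // RejectedExecutionException rather than queueing forever.
      executor.shutdown();
      throw new RejectedExecutionException("Executor is shutting down");
    }
  }
{code}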





> S3A: stream write/close fails badly once FS is closed
> -----------------------------------------------------
>
>                 Key: HADOOP-19569
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19569
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.5.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>              Labels: pull-request-available
>
> When closing a process during a large upload, an NPE is triggered in the
> abort call. This is because the S3 client has already been released.
> {code}
> java.lang.NullPointerException
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$abortMultipartUpload$41(S3AFileSystem.java:5337)
>         at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
>         at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
>         at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.abortMultipartUpload(S3AFileSystem.java:5336)
>         at 
> org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$abortMultipartUpload$4(WriteOperationHelper.java:392)
> {code}
> * close() on small writes also fails, just with a different exception.
> * On some large writes, the output stream hangs as it awaits the end of
> the queued writes. This is a problem inside the semaphored executor.
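
A minimal reproduction sketch of the failure mode described above; the bucket
name, path and write size are made up, and the multipart threshold depends on
configuration:

{code}
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);
FSDataOutputStream out = fs.create(new Path("/test/large-object"));
// write enough data that a multipart upload is in flight when the FS closes
out.write(new byte[128 * 1024 * 1024]);
fs.close();   // releases the S3 client while uploads are still queued
out.close();  // the abort/complete path now NPEs, or hangs in the semaphored executor
{code}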


