[ https://issues.apache.org/jira/browse/HADOOP-17847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631386#comment-17631386 ]

duhanmin edited comment on HADOOP-17847 at 11/10/22 2:01 AM:
-------------------------------------------------------------

I also hit a similar error when uploading files via TextOutputFormat.getRecordWriter.

hadoop:hadoop-aws-3.2.1-amzn-4
{code:java}
//log

2022-11-10 01:43:44.337 [0-0-0-writer] WARN  S3AInstrumentation - Closing output stream statistics while data is still marked as pending upload in OutputStreamStatistics{blocksSubmitted=1, blocksInQueue=1, blocksActive=0, blockUploadsCompleted=0, blockUploadsFailed=0, bytesPendingUpload=55436789, bytesUploaded=0, blocksAllocated=1, blocksReleased=1, blocksActivelyAllocated=0, exceptionsInMultipartFinalize=0, transferDuration=0 ms, queueDuration=0 ms, averageQueueTime=0 ms, totalUploadDuration=0 ms, effectiveBandwidth=0.0 bytes/s}
2022-11-10 01:43:44.343 [0-0-0-writer] ERROR S3Writer$Task - error
java.io.IOException: regular upload failed: java.lang.NullPointerException
    at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:303) ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:453) ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:365) ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:73) ~[hadoop-common-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:102) ~[hadoop-common-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:99) ~[hadoop-mapreduce-client-core-3.2.1-amzn-4.jar:na]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_312]
Caused by: java.lang.NullPointerException: null
    at org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1189) ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1179) ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.S3AFileSystem.incrementPutStartStatistics(S3AFileSystem.java:1649) ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1584) ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$putObject$5(WriteOperationHelper.java:430) ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109) ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265) ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322) ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261) ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236) ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:123) ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:428) ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.lambda$putObject$0(S3ABlockOutputStream.java:438) ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.util.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:219) ~[hadoop-common-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.util.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:219) ~[hadoop-common-3.2.1-amzn-4.jar:na]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_312]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_312]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_312]
    ... 1 common frames omitted{code}
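The failure mode in the trace above (a block output stream's close() path touching its parent filesystem's statistics after that filesystem has already shut down) can be modeled in plain Java. This is a toy sketch under my own assumptions, not Hadoop code; the class names `ToyFs`, `StatsCounter`, and `ToyStream` are hypothetical stand-ins for S3AFileSystem, its instrumentation, and S3ABlockOutputStream:
```java
import java.io.IOException;

// Toy model: the stream keeps a reference to its parent "filesystem", whose
// statistics object is nulled when the filesystem closes. Closing the stream
// afterwards dereferences null, and the wrapper surfaces it the same way the
// real trace does: "regular upload failed: java.lang.NullPointerException".
class StatsCounter { long count; }

class ToyFs {
    StatsCounter stats = new StatsCounter();
    void close() { stats = null; }                 // filesystem shut down first
    void incrementStatistic() { stats.count++; }   // NPE once stats is null
}

class ToyStream {
    private final ToyFs fs;
    ToyStream(ToyFs fs) { this.fs = fs; }
    void close() throws IOException {
        try {
            fs.incrementStatistic();   // mirrors incrementPutStartStatistics
        } catch (NullPointerException e) {
            throw new IOException("regular upload failed: " + e.getClass().getName(), e);
        }
    }
}

public class Demo {
    public static void main(String[] args) {
        ToyFs fs = new ToyFs();
        ToyStream out = new ToyStream(fs);
        fs.close();                    // e.g. a shutdown hook closing filesystems
        try {
            out.close();               // runs after the fs is gone
        } catch (IOException e) {
            System.out.println(e.getMessage());
            // prints: regular upload failed: java.lang.NullPointerException
        }
    }
}
```
If this model matches what happens in S3A, the pending-upload WARN plus the NPE would both be symptoms of the stream being closed after its owning filesystem, rather than an upload error per se.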
 


was (Author: JIRAUSER292033):
I also hit a similar error when uploading files via TextOutputFormat.getRecordWriter.
 

> S3AInstrumentation Closing output stream statistics while data is still 
> marked as pending upload in OutputStreamStatistics
> --------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-17847
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17847
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.2.1
>         Environment: hadoop: 3.2.1
> spark: 3.0.2
> k8s server version: 1.18
> aws.java.sdk.bundle.version:1.11.1033
>            Reporter: Li Rong
>            Priority: Major
>         Attachments: logs.txt
>
>
> When using the Hadoop s3a filesystem to upload Spark event logs, the logs were 
> queued up but not uploaded before the process shut down:
> {code:java}
> // 21/08/13 12:22:39 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client 
> has been closed (this is expected if the application is shutting down.)
> 21/08/13 12:22:39 WARN S3AInstrumentation: Closing output stream statistics 
> while data is still marked as pending upload in 
> OutputStreamStatistics{blocksSubmitted=1, blocksInQueue=1, blocksActive=0, 
> blockUploadsCompleted=0, blockUploadsFailed=0, bytesPendingUpload=106716, 
> bytesUploaded=0, blocksAllocated=1, blocksReleased=1, 
> blocksActivelyAllocated=0, exceptionsInMultipartFinalize=0, 
> transferDuration=0 ms, queueDuration=0 ms, averageQueueTime=0 ms, 
> totalUploadDuration=0 ms, effectiveBandwidth=0.0 bytes/s}{code}
> details see logs attached



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
