[ https://issues.apache.org/jira/browse/HADOOP-17847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631386#comment-17631386 ]

duhanmin edited comment on HADOOP-17847 at 11/10/22 8:06 AM:
-------------------------------------------------------------

I hit a similar error when uploading files via TextOutputFormat's 
getRecordWriter.
{code:java}
//log
2022-11-10 07:36:39.657 [0-0-0-writer] DEBUG S3ABlockOutputStream - 
S3ABlockOutputStream{WriteOperationHelper {bucket=prod-lb-quote-data-opra}, 
blockSize=104857600, activeBlock=FileBlock{index=1, 
destFile=/mnt/var/lib/hadoop/tmp/s3a/s3ablock-0001-3171146831518189026.tmp, 
state=Writing, dataSize=57483314, limit=104857600}}: Closing block #1: current 
block= FileBlock{index=1, 
destFile=/mnt/var/lib/hadoop/tmp/s3a/s3ablock-0001-3171146831518189026.tmp, 
state=Writing, dataSize=57483314, limit=104857600}
2022-11-10 07:36:39.657 [0-0-0-writer] DEBUG S3ABlockOutputStream - Executing 
regular upload for WriteOperationHelper {bucket=prod-lb-quote-data-opra}
2022-11-10 07:36:39.657 [0-0-0-writer] DEBUG S3ADataBlocks - Start datablock[1] 
upload
2022-11-10 07:36:39.657 [0-0-0-writer] DEBUG S3ADataBlocks - FileBlock{index=1, 
destFile=/mnt/var/lib/hadoop/tmp/s3a/s3ablock-0001-3171146831518189026.tmp, 
state=Writing, dataSize=57483314, limit=104857600}: entering state Upload
2022-11-10 07:36:39.660 [0-0-0-writer] DEBUG S3ABlockOutputStream - Clearing 
active block
2022-11-10 07:36:39.661 [s3a-transfer-shared-pool1-t1] DEBUG S3AFileSystem - 
PUT 57483314 bytes to 
data_studio/lb/statistic_financial_tmp/20221110_073635_356_2b6a2aa765db483f9fd591f00c2a6826.text
2022-11-10 07:36:39.661 [s3a-transfer-shared-pool1-t1] DEBUG S3AFileSystem - 
PUT start 57483314 bytes
2022-11-10 07:36:39.661 [s3a-transfer-shared-pool1-t1] DEBUG 
S3ABlockOutputStream - Closing 
org.apache.hadoop.fs.s3a.S3ADataBlocks$BlockUploadData@6bf27c5b
2022-11-10 07:36:39.661 [s3a-transfer-shared-pool1-t1] DEBUG 
S3ABlockOutputStream - Closing FileBlock{index=1, 
destFile=/mnt/var/lib/hadoop/tmp/s3a/s3ablock-0001-3171146831518189026.tmp, 
state=Upload, dataSize=57483314, limit=104857600}
2022-11-10 07:36:39.661 [s3a-transfer-shared-pool1-t1] DEBUG S3ADataBlocks - 
FileBlock{index=1, 
destFile=/mnt/var/lib/hadoop/tmp/s3a/s3ablock-0001-3171146831518189026.tmp, 
state=Upload, dataSize=57483314, limit=104857600}: entering state Closed
2022-11-10 07:36:39.661 [s3a-transfer-shared-pool1-t1] DEBUG S3ADataBlocks - 
Closed FileBlock{index=1, 
destFile=/mnt/var/lib/hadoop/tmp/s3a/s3ablock-0001-3171146831518189026.tmp, 
state=Closed, dataSize=57483314, limit=104857600}
2022-11-10 07:36:39.661 [s3a-transfer-shared-pool1-t1] DEBUG S3ADataBlocks - 
Closing FileBlock{index=1, 
destFile=/mnt/var/lib/hadoop/tmp/s3a/s3ablock-0001-3171146831518189026.tmp, 
state=Closed, dataSize=57483314, limit=104857600}
2022-11-10 07:36:39.661 [s3a-transfer-shared-pool1-t1] DEBUG S3ADataBlocks - 
block[1]: closeBlock()
2022-11-10 07:36:39.670 [0-0-0-writer] DEBUG WriteOperationHelper - Write to 
WriteOperationHelper {bucket=prod-lb-quote-data-opra} failed
java.io.IOException: regular upload failed: java.lang.NullPointerException
    at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:303) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:453)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:365)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:73)
 [hadoop-common-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:102) 
[hadoop-common-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:99)
 [hadoop-mapreduce-client-core-3.2.1-amzn-4.jar:na]
    at 
com.alibaba.datax.plugin.writer.s3writer.writer.Text.close(Text.java:217) 
[s3writer-0.0.1-SNAPSHOT.jar:na]
    at 
com.alibaba.datax.plugin.writer.s3writer.S3Writer$Task.destroy(S3Writer.java:217)
 [s3writer-0.0.1-SNAPSHOT.jar:na]
    at 
com.alibaba.datax.core.taskgroup.runner.AbstractRunner.destroy(AbstractRunner.java:28)
 [AppMaster.jar:na]
    at 
com.alibaba.datax.core.taskgroup.runner.WriterRunner.run(WriterRunner.java:76) 
[AppMaster.jar:na]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_312]
Caused by: java.lang.NullPointerException: null
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1189)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1179)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.incrementPutStartStatistics(S3AFileSystem.java:1649)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1584) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$putObject$5(WriteOperationHelper.java:430)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236) 
~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:123)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:428)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.lambda$putObject$0(S3ABlockOutputStream.java:438)
 ~[hadoop-aws-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.util.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:219)
 ~[hadoop-common-3.2.1-amzn-4.jar:na]
    at 
org.apache.hadoop.util.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:219)
 ~[hadoop-common-3.2.1-amzn-4.jar:na]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_312]
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[na:1.8.0_312]
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
~[na:1.8.0_312]
    ... 1 common frames omitted
2022-11-10 07:36:39.670 [0-0-0-writer] DEBUG S3ABlockOutputStream - Closing 
FileBlock{index=1, 
destFile=/mnt/var/lib/hadoop/tmp/s3a/s3ablock-0001-3171146831518189026.tmp, 
state=Closed, dataSize=57483314, limit=104857600}
2022-11-10 07:36:39.670 [0-0-0-writer] DEBUG S3ABlockOutputStream - Closing 
org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory@24d984d6
2022-11-10 07:36:39.670 [0-0-0-writer] DEBUG S3ABlockOutputStream - Statistics: 
OutputStreamStatistics{blocksSubmitted=1, blocksInQueue=1, blocksActive=0, 
blockUploadsCompleted=0, blockUploadsFailed=0, bytesPendingUpload=57483314, 
bytesUploaded=0, blocksAllocated=1, blocksReleased=1, 
blocksActivelyAllocated=0, exceptionsInMultipartFinalize=0, transferDuration=0 
ms, queueDuration=0 ms, averageQueueTime=0 ms, totalUploadDuration=0 ms, 
effectiveBandwidth=0.0 bytes/s}
2022-11-10 07:36:39.670 [0-0-0-writer] DEBUG S3ABlockOutputStream - Closing 
OutputStreamStatistics{blocksSubmitted=1, blocksInQueue=1, blocksActive=0, 
blockUploadsCompleted=0, blockUploadsFailed=0, bytesPendingUpload=57483314, 
bytesUploaded=0, blocksAllocated=1, blocksReleased=1, 
blocksActivelyAllocated=0, exceptionsInMultipartFinalize=0, transferDuration=0 
ms, queueDuration=0 ms, averageQueueTime=0 ms, totalUploadDuration=0 ms, 
effectiveBandwidth=0.0 bytes/s}
2022-11-10 07:36:39.670 [0-0-0-writer] WARN  S3AInstrumentation - Closing 
output stream statistics while data is still marked as pending upload in 
OutputStreamStatistics{blocksSubmitted=1, blocksInQueue=1, blocksActive=0, 
blockUploadsCompleted=0, blockUploadsFailed=0, bytesPendingUpload=57483314, 
bytesUploaded=0, blocksAllocated=1, blocksReleased=1, 
blocksActivelyAllocated=0, exceptionsInMultipartFinalize=0, transferDuration=0 
ms, queueDuration=0 ms, averageQueueTime=0 ms, totalUploadDuration=0 ms, 
effectiveBandwidth=0.0 bytes/s}{code}
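The NullPointerException at S3AFileSystem.incrementStatistic appears to occur because the filesystem's instrumentation has already been released by the time the queued PUT runs on the transfer thread. A minimal plain-Java sketch of that pattern (no Hadoop dependency; all class and method names below are illustrative, not Hadoop's actual API):

```java
// Sketch of the suspected race: close() releases the statistics object
// while an upload task is still pending, so the task's later statistics
// update dereferences null.
import java.util.concurrent.atomic.AtomicLong;

class SketchFileSystem {
    // Stands in for the instrumentation/statistics object;
    // becomes null once the filesystem is closed.
    private AtomicLong putRequests = new AtomicLong();

    void incrementPutStartStatistics() {
        // NPE here if close() ran first
        putRequests.incrementAndGet();
    }

    void close() {
        putRequests = null; // resources released on filesystem close
    }
}

public class NpeOnClosedFsSketch {
    public static void main(String[] args) {
        SketchFileSystem fs = new SketchFileSystem();
        fs.close(); // filesystem closed, e.g. during process shutdown
        try {
            fs.incrementPutStartStatistics(); // pending upload still runs
        } catch (NullPointerException e) {
            System.out.println("NullPointerException from statistics update");
        }
    }
}
```

If this is the mechanism, the upload that was still queued at shutdown fails with the NPE instead of completing, which matches the "data still marked as pending upload" warning above.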



> S3AInstrumentation Closing output stream statistics while data is still 
> marked as pending upload in OutputStreamStatistics
> --------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-17847
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17847
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.2.1
>         Environment: hadoop: 3.2.1
> spark: 3.0.2
> k8s server version: 1.18
> aws.java.sdk.bundle.version:1.11.1033
>            Reporter: Li Rong
>            Priority: Major
>         Attachments: logs.txt
>
>
> When using the Hadoop s3a filesystem to upload Spark event logs, the logs 
> were queued but not uploaded before the process shut down:
> {code:java}
> // 21/08/13 12:22:39 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client 
> has been closed (this is expected if the application is shutting down.)
> 21/08/13 12:22:39 WARN S3AInstrumentation: Closing output stream statistics 
> while data is still marked as pending upload in 
> OutputStreamStatistics{blocksSubmitted=1, blocksInQueue=1, blocksActive=0, 
> blockUploadsCompleted=0, blockUploadsFailed=0, bytesPendingUpload=106716, 
> bytesUploaded=0, blocksAllocated=1, blocksReleased=1, 
> blocksActivelyAllocated=0, exceptionsInMultipartFinalize=0, 
> transferDuration=0 ms, queueDuration=0 ms, averageQueueTime=0 ms, 
> totalUploadDuration=0 ms, effectiveBandwidth=0.0 bytes/s}{code}
> details see logs attached



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
