[ https://issues.apache.org/jira/browse/HADOOP-18695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17714975#comment-17714975 ]

ASF GitHub Bot commented on HADOOP-18695:
-----------------------------------------

steveloughran commented on PR #5548:
URL: https://github.com/apache/hadoop/pull/5548#issuecomment-1517730579

   
   Tested s3 london; I'm now playing with both prefetching and non-prefetching 
test runs.
   At least with my local test setup, it's a bit slower with -Dprefetch, but 
remember these aren't benchmarks: they're unit tests where the connections 
are all very short-lived, so the FS instances don't even have time to build up 
that http pool.
   
   ### prefetch
   ```
   
   time mvit -Dparallel-tests -DtestsThreadCount=8 -Dscale 
-Dfs.s3a.scale.test.huge.filesize=100M -Dprefetch
   
   
   ________________________________________________________
   Executed in   16.32 mins    fish           external
      usr time   20.57 mins    0.14 millis   20.57 mins
      sys time    3.24 mins    1.70 millis    3.24 mins
   
   ```
   One failure there, which is HADOOP-18697:
   ```
   [ERROR] 
testRandomReadLargeFile(org.apache.hadoop.fs.s3a.ITestS3APrefetchingInputStream)
  Time elapsed: 25.738 s  <<< FAILURE!
   org.junit.ComparisonFailure: [Gauge named stream_read_blocks_in_cache with 
expected value 0] expected:<[0]L> but was:<[1]L>
           at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
Method)
           at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
           at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
           at 
org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticValue(IOStatisticAssertions.java:257)
           at 
org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticGaugeValue(IOStatisticAssertions.java:190)
           at 
org.apache.hadoop.fs.s3a.ITestS3APrefetchingInputStream.testRandomReadLargeFile(ITestS3APrefetchingInputStream.java:210)
   ```
   
   ### no prefetch
   ```
   
   time mvit -Dparallel-tests -DtestsThreadCount=8 -Dscale 
-Dfs.s3a.scale.test.huge.filesize=100M
   
   ________________________________________________________
   Executed in   15.91 mins    fish           external
      usr time   20.34 mins    0.15 millis   20.34 mins
      sys time    3.24 mins    1.90 millis    3.24 mins
   ```
   
   




> S3A: reject multipart copy requests when disabled
> -------------------------------------------------
>
>                 Key: HADOOP-18695
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18695
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>              Labels: pull-request-available
>
> follow-on to HADOOP-18637 and support for huge file uploads with stores which 
> don't support MPU.
> * prevent use of API against any s3 store when disabled, using logging 
> auditor to reject it
> * tests to verify rename of huge files still works (by setting large part 
> size)
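The rejection mechanism described above ("using logging auditor to reject it") can be sketched in isolation. This is not the actual S3A auditor API; the `MultipartGuard` class and its method names are invented for illustration, assuming the switch comes from a config flag such as `fs.s3a.multipart.uploads.enabled` (added in HADOOP-18637):

```java
// Illustrative sketch only: mimics the auditor-based rejection of multipart
// requests when multipart is disabled. Class and method names are hypothetical;
// the real change wires an equivalent check into the S3A logging auditor.
public class MultipartGuard {

    // Would be read from fs.s3a.multipart.uploads.enabled in the real store.
    private final boolean multipartEnabled;

    public MultipartGuard(boolean multipartEnabled) {
        this.multipartEnabled = multipartEnabled;
    }

    /**
     * Reject multipart upload/copy requests when multipart is disabled.
     * @param requestType name of the S3 request about to be issued
     * @throws UnsupportedOperationException if the request must be blocked
     */
    public void beforeRequest(String requestType) {
        if (!multipartEnabled
                && (requestType.equals("UploadPartCopy")
                    || requestType.equals("UploadPart"))) {
            throw new UnsupportedOperationException(
                "Multipart operations are disabled: rejecting " + requestType);
        }
    }
}
```

With the flag enabled, all requests pass through; with it disabled, any multipart part upload or copy fails fast at audit time rather than partway through a huge-file rename.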



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
