[ https://issues.apache.org/jira/browse/HADOOP-17415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18039333#comment-18039333 ]

ASF GitHub Bot commented on HADOOP-17415:
-----------------------------------------

monthonk opened a new pull request, #3939:
URL: https://github.com/apache/hadoop/pull/3939

   ### Description of PR
   
   As part of all the openFile work, knowing the full length of an object allows a HEAD request to be skipped. But code that knows only the splits does not know the final length of the file.
   
   If the Content-Range header is used, then as soon as a single GET is initiated against an object and that field comes back in the response, we can update the length of the S3A stream to its real/final length.
   
   * Skip the file status probe on openFile; the content length will be updated on the first read.
   * As a side effect, the modification time, etag and version id are also unknown, since there is no probe on openFile.
   * Allow the content length to be a negative value, which means the content length is unknown.
   * On reading the file, use the information from the Content-Range header to update the content length.
   * If an out-of-range exception occurs on read, the ActualObjectSize from the exception details can be used to update the content length (see the sketch after this list).
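   
   A hedged sketch of that out-of-range case, assuming the v1 AWS SDK exposes the 416 error details (including `ActualObjectSize`) through `AmazonS3Exception#getAdditionalDetails()`; the helper is illustrative rather than the PR's actual code:
   
   ```java
   import com.amazonaws.services.s3.model.AmazonS3Exception;
   
   /** Hypothetical handling of a 416 InvalidRange response during a ranged GET. */
   final class RangeErrorHandler {
     private static final int SC_REQUESTED_RANGE_NOT_SATISFIABLE = 416;
   
     /**
      * @param e the service exception raised by the GET
      * @return the real object length if the error details carry it, or -1
      */
     static long actualObjectSize(AmazonS3Exception e) {
       if (e.getStatusCode() != SC_REQUESTED_RANGE_NOT_SATISFIABLE
           || e.getAdditionalDetails() == null) {
         return -1L;
       }
       String size = e.getAdditionalDetails().get("ActualObjectSize");
       if (size == null) {
         return -1L;
       }
       try {
         // use this value to correct the stream's notion of the content length
         return Long.parseLong(size);
       } catch (NumberFormatException ex) {
         return -1L;
       }
     }
   }
   ```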
   
   ### How was this patch tested?
   
   Tested in `eu-west-1` with `mvn -Dparallel-tests -DtestsThreadCount=16 clean verify` (the access point tests were already failing from the previous SDK upgrade).
   
   ```
   [INFO] Results:
   [INFO] 
   [ERROR] Errors: 
   [ERROR]   ITestS3ABucketExistence.testAccessPointProbingV2:171->expectUnknownStore:103->lambda$testAccessPointProbingV2$12:172 » IllegalArgument
   [ERROR]   ITestS3ABucketExistence.testAccessPointRequired:188->expectUnknownStore:103->lambda$testAccessPointRequired$14:189 » IllegalArgument
   [INFO] 
   [ERROR] Tests run: 1063, Failures: 0, Errors: 2, Skipped: 186
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 108, Failures: 0, Errors: 0, Skipped: 68
   ```
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
   - [x] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Use S3 content-range header to update length of an object during reads
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-17415
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17415
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.3.0
>            Reporter: Steve Loughran
>            Assignee: Monthon Klongklaew
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> As part of all the openFile work, knowing the full length of an object allows 
> a HEAD to be skipped. But: code that knows only the splits doesn't know the 
> final length of the file.
> If the Content-Range header is used, then as soon as a single GET is 
> initiated against an object and the field is returned, we can update the 
> length of the S3A stream to its real/final length.
> Also: when any input stream fails with an EOF exception, we can distinguish 
> stream-interrupted from "no, too far".



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
