[ 
https://issues.apache.org/jira/browse/HADOOP-15245?focusedWorklogId=738073&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-738073
 ]

ASF GitHub Bot logged work on HADOOP-15245:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 08/Mar/22 11:10
            Start Date: 08/Mar/22 11:10
    Worklog Time Spent: 10m 
      Work Description: dannycjones edited a comment on pull request #3927:
URL: https://github.com/apache/hadoop/pull/3927#issuecomment-1061663275


   @aajisaka is it possible to retry a Yetus run that failed due to a timeout? I'm 
assuming it is not a problem with the PR itself. Any recommendation here?
   
   The error is:
   
   ```
   [2022-03-04T13:22:23.739Z] /usr/bin/mvn --batch-mode 
-Dmaven.repo.local=/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-3927/yetus-m2/hadoop-trunk-patch-1
 -Ptest-patch -DskipTests -Pnative -Drequire.fuse -Drequire.openssl 
-Drequire.snappy -Drequire.valgrind -Drequire.zstd -Drequire.test.libhadoop 
-Pyarn-ui clean test-compile -DskipTests=true > 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-3927/ubuntu-focal/out/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt
 2>&1
   
   [2022-03-04T13:29:39.581Z] time="2022-03-04T13:29:33Z" level=error 
msg="error waiting for container: unexpected EOF"
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 738073)
    Time Spent: 1h 50m  (was: 1h 40m)

> S3AInputStream.skip() to use lazy seek
> --------------------------------------
>
>                 Key: HADOOP-15245
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15245
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.1.0
>            Reporter: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> the default skip() does a read-and-discard of all the bytes, no matter how far 
> ahead the skip is. This is very inefficient when skip() is invoked on an S3A 
> stream in random IO mode, though it is less clear exactly what to do in 
> sequential mode.
> Proposed: 
> * add an optimized version of S3AInputStream.skip() which does a lazy seek, 
> which will itself decide when to skip() forward within the open stream vs issue a new GET.
> * add some more instrumentation to measure how often this gets used
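The lazy-seek idea above can be sketched as follows. This is a minimal, self-contained illustration, not the actual S3AInputStream code: the class name `LazySeekStream`, the `SKIP_LIMIT` threshold, and the instrumentation counters are all hypothetical stand-ins. skip() only advances a logical position; the (simulated) underlying HTTP stream is reconciled on the next read, either by draining a small forward gap in-stream or by counting a simulated new GET for a large one.

```java
// Hypothetical sketch of lazy-seek skip(); NOT the real S3AInputStream.
class LazySeekStream {
    private long pos = 0;            // logical position the caller sees
    private long streamPos = 0;      // position of the simulated open HTTP stream
    private final long length;
    long bytesDiscarded = 0;         // instrumentation: bytes drained in-stream
    long reopens = 0;                // instrumentation: simulated new GET requests
    private static final long SKIP_LIMIT = 1024; // forward gap worth draining

    LazySeekStream(long length) { this.length = length; }

    /** Lazy skip: only move the logical position, clamped to EOF. */
    long skip(long n) {
        long skipped = Math.min(n, length - pos);
        pos += skipped;
        return skipped;
    }

    /** On a real read, reconcile the stream with the logical position. */
    int read() {
        if (pos >= length) return -1;
        long gap = pos - streamPos;
        if (gap != 0) {
            if (gap > 0 && gap <= SKIP_LIMIT) {
                bytesDiscarded += gap;   // cheap: drain the open stream
            } else {
                reopens++;               // expensive: abort and issue a new GET
            }
            streamPos = pos;
        }
        streamPos++;
        pos++;
        return 0; // dummy byte in this simulation
    }
}
```

The point of deferring the decision to read() is that consecutive skips (or a skip followed by an explicit seek) collapse into a single repositioning, and the expensive drain-vs-reopen choice is made once, with the full gap known.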



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
