[ https://issues.apache.org/jira/browse/HADOOP-15063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

wujinhu updated HADOOP-15063:
-----------------------------
    Description: 
An IOException is thrown in the following case:
1. set part size = n (102400)
2. assume the current position = 0, so partRemaining = 102400
3. call seek(pos = 101802); since pos > position && pos < position + partRemaining, the stream skips pos - position bytes, but partRemaining is left unchanged
4. a subsequent read of more than n - pos bytes then throws an IOException.
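With the numbers above the mismatch is easy to see. The snippet below is an illustrative, standalone sketch of the accounting only (the variable names mirror the issue text, it is not code from AliyunOSSInputStream itself):
{code:java}
public class PartRemainingMismatch {
  public static void main(String[] args) {
    long partSize = 102400;               // n, the configured part size
    long position = 0;
    long partRemaining = partSize;        // 102400 right after the part is opened

    long pos = 101802;                    // the seek target from step 3
    // seek() skips (pos - position) bytes of the wrapped stream and moves position ...
    position = pos;
    // ... but partRemaining is left at 102400, so the bookkeeping diverges:
    long actuallyLeft = partSize - pos;   // only 598 bytes really remain in the part
    System.out.println("partRemaining = " + partRemaining
        + ", actually left = " + actuallyLeft);
    // A following read of 1024 bytes (> 598) then fails with the IOException in the logs below.
  }
}
{code}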

Current code:
{code:java}
@Override
  public synchronized void seek(long pos) throws IOException {
    checkNotClosed();
    if (position == pos) {
      return;
    } else if (pos > position && pos < position + partRemaining) {
      AliyunOSSUtils.skipFully(wrappedStream, pos - position);
      // BUG: partRemaining should also be reduced by (pos - position) here
      position = pos;
    } else {
      reopen(pos);
    }
  }
{code}
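A minimal sketch of the kind of fix the comment points at: decrement partRemaining by the number of bytes skipped so it stays in sync with what is actually left in the wrapped stream (a sketch only, not a committed patch):
{code:java}
@Override
public synchronized void seek(long pos) throws IOException {
  checkNotClosed();
  if (position == pos) {
    return;
  } else if (pos > position && pos < position + partRemaining) {
    long len = pos - position;
    AliyunOSSUtils.skipFully(wrappedStream, len);
    position = pos;
    partRemaining -= len;   // keep the part accounting consistent after skipping
  } else {
    reopen(pos);
  }
}
{code}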

Logs:
java.io.IOException: Failed to read from stream. Remaining:101802
        at org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.read(AliyunOSSInputStream.java:182)
        at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
        at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)

How to reproduce:
1. create a 10 MB file
2. run the following positioned-read loop against it:
{code:java}
int seekTimes = 150;
for (int i = 0; i < seekTimes; i++) {
  long pos = size / (seekTimes - i) - 1;  // positions that converge on the end of the file
  LOG.info("begin seeking for pos: " + pos);
  byte[] buf = new byte[1024];
  instream.read(pos, buf, 0, 1024);       // fails once the seek mis-accounts a part boundary
}
{code}
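For reference, a self-contained variant of the loop above, assuming an OSS filesystem is already configured and reachable; the oss:// path and class name are placeholders:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OssSeekReadRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // placeholder path; point it at the 10 MB file created in step 1
    Path testFile = new Path("oss://my-bucket/test/10mb.bin");
    FileSystem fs = testFile.getFileSystem(conf);
    long size = fs.getFileStatus(testFile).getLen();

    try (FSDataInputStream instream = fs.open(testFile)) {
      int seekTimes = 150;
      for (int i = 0; i < seekTimes; i++) {
        long pos = size / (seekTimes - i) - 1;
        byte[] buf = new byte[1024];
        // positioned read: seeks to pos, then reads up to 1024 bytes
        instream.read(pos, buf, 0, 1024);
      }
    }
  }
}
{code}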



> IOException will be thrown when read from Aliyun OSS
> ----------------------------------------------------
>
>                 Key: HADOOP-15063
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15063
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/oss
>    Affects Versions: 3.0.0-alpha2
>            Reporter: wujinhu
>            Priority: Critical
>


