[ 
https://issues.apache.org/jira/browse/HADOOP-11867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16705141#comment-16705141
 ] 

Owen O'Malley commented on HADOOP-11867:
----------------------------------------

I'd like to propose the following API:

{code:java}
package org.apache.hadoop.fs;
/**
 * A range of bytes from a file.
 */
public class FileRange {
  public final long offset;
  public final int length; // max length is 2^31 because of Java arrays
  public ByteBuffer buffer;
  public FileRange(long offset, int length, ByteBuffer buffer) {
    this.offset = offset;
    this.length = length;
    this.buffer = buffer;
  }
}

public class FSDataInputStream ... {
  ...
  /**
   * Perform an asynchronous read of the file with multiple ranges. This call
   * will return immediately and return futures that will contain the data
   * once it is read. The order of the physical reads is an implementation
   * detail of this method. Multiple requests may be converted into a single
   * read.
   *
   * If any ranges do not have a buffer, an array-based one of the appropriate
   * size will be created for it.
   * @param ranges the list of disk ranges to read
   * @return a future for each range that completes with the filled range
   * @throws IOException if the file is not available
   */
  public CompletableFuture<FileRange>[] readAsync(List<FileRange> ranges)
      throws IOException { ... }
  ...
}
{code}

FSDataInputStream will provide a default implementation, but file systems will 
be able to supply a more optimized implementation for their files.
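To make the proposal concrete, here is a self-contained sketch of what the naive default could look like: one asynchronous positioned read per range. Note that {{ToyStream}}, its in-memory {{readFully}}, and the {{Demo}} driver are illustrative stand-ins, not real Hadoop APIs.

{code:java}
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;

/** Illustrative FileRange, as proposed above. */
class FileRange {
  public final long offset;
  public final int length;
  public ByteBuffer buffer;
  public FileRange(long offset, int length, ByteBuffer buffer) {
    this.offset = offset;
    this.length = length;
    this.buffer = buffer;
  }
}

/** Toy in-memory stream standing in for FSDataInputStream. */
class ToyStream {
  private final byte[] data;
  ToyStream(byte[] data) { this.data = data; }

  /** Stand-in for a positioned readFully(long, byte[], int, int). */
  void readFully(long position, byte[] buf, int off, int len) {
    System.arraycopy(data, (int) position, buf, off, len);
  }

  /** Naive default: one asynchronous positioned read per range. */
  @SuppressWarnings("unchecked")
  public CompletableFuture<FileRange>[] readAsync(List<FileRange> ranges) {
    CompletableFuture<FileRange>[] result =
        new CompletableFuture[ranges.size()];
    int i = 0;
    for (FileRange r : ranges) {
      result[i++] = CompletableFuture.supplyAsync(() -> {
        if (r.buffer == null) {
          // create an array-based buffer of the appropriate size
          r.buffer = ByteBuffer.allocate(r.length);
        }
        readFully(r.offset, r.buffer.array(),
            r.buffer.arrayOffset() + r.buffer.position(), r.length);
        return r;
      });
    }
    return result;
  }
}

public class Demo {
  public static void main(String[] args) {
    byte[] file = new byte[100];
    for (int i = 0; i < file.length; i++) file[i] = (byte) i;
    ToyStream in = new ToyStream(file);
    List<FileRange> ranges = List.of(
        new FileRange(10, 4, null), new FileRange(50, 2, null));
    for (CompletableFuture<FileRange> f : in.readAsync(ranges)) {
      FileRange r = f.join();  // block until that read completes
      System.out.println(r.offset + ": " +
          Arrays.toString(Arrays.copyOf(r.buffer.array(), r.length)));
    }
  }
}
{code}

A real implementation would delegate to the stream's positioned read; object stores could instead coalesce adjacent ranges into fewer, larger reads.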

Thoughts?

> FS API: Add a high-performance vectored Read to FSDataInputStream API
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-11867
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11867
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: hdfs-client
>    Affects Versions: 3.0.0
>            Reporter: Gopal V
>            Assignee: Owen O'Malley
>            Priority: Major
>              Labels: performance
>
> The most effective way to read efficiently from a filesystem is to let the 
> FileSystem implementation handle the seek behaviour underneath the API, so 
> that it can be as efficient as possible.
> A better approach to the seek problem is to provide a sequence of read 
> locations as part of a single call, while letting the system schedule/plan 
> the reads ahead of time.
> This is exceedingly useful for seek-heavy readers on HDFS, since this allows 
> for potentially optimizing away the seek-gaps within the FSDataInputStream 
> implementation.
> For seek+read systems with even more latency than locally-attached disks, 
> something like a {{readFully(long[] offsets, ByteBuffer[] chunks)}} would 
> take care of the seeks internally while reading chunk.remaining() bytes into 
> each chunk (which may be {{slice()}}ed off a bigger buffer).
> The base implementation can stub in this as a sequence of seeks + read() into 
> ByteBuffers, without forcing each FS implementation to override this in any 
> way.
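A note on the {{slice()}} pattern the description mentions: slices share one backing array, so a vectored read could fill every chunk without extra allocation or copying. A minimal sketch (the class name {{SliceDemo}} is illustrative):

{code:java}
import java.nio.ByteBuffer;

public class SliceDemo {
  public static void main(String[] args) {
    // One large backing buffer; each chunk is a slice() over a part of it,
    // so a vectored readFully(long[] offsets, ByteBuffer[] chunks) could
    // fill all chunks without allocating separate arrays.
    ByteBuffer big = ByteBuffer.allocate(1024);
    big.position(0).limit(100);
    ByteBuffer chunk1 = big.slice();   // 100 bytes, shares the backing array
    big.position(100).limit(164);
    ByteBuffer chunk2 = big.slice();   // 64 bytes
    System.out.println(chunk1.remaining() + " " + chunk2.remaining());
    // prints "100 64"
  }
}
{code}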



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
