[ https://issues.apache.org/jira/browse/HDFS-6097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13935111#comment-13935111 ]

Hudson commented on HDFS-6097:
------------------------------

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1726 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1726/])
HDFS-6097. Zero-copy reads are incorrectly disabled on file offsets above 2GB (cmccabe) (cmccabe: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1577350)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/ShortCircuitReplica.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestEnhancedByteBufferAccess.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java


> zero-copy reads are incorrectly disabled on file offsets above 2GB
> ------------------------------------------------------------------
>
>                 Key: HDFS-6097
>                 URL: https://issues.apache.org/jira/browse/HDFS-6097
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client
>    Affects Versions: 2.4.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>             Fix For: 2.4.0
>
>         Attachments: HDFS-6097.003.patch, HDFS-6097.004.patch, HDFS-6097.005.patch
>
>
> Zero-copy reads are incorrectly disabled on file offsets above 2GB. The code 
> that is supposed to disable zero-copy reads only for offsets within a block 
> file greater than 2GB (because MappedByteBuffer segments are limited to that 
> size) mistakenly applies the check to the offset within the HDFS file instead.
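
For illustration, a minimal Java sketch of the offset check described in the
quoted description. This is not the actual DFSInputStream or
ShortCircuitReplica code; the class, method, and parameter names
(ZeroCopyOffsetCheckSketch, canUseZeroCopyBuggy, canUseZeroCopyFixed,
blockStartInFile) are hypothetical, and the 2GB constant simply stands in for
the MappedByteBuffer segment size limit. It only contrasts checking the file
offset (the bug) with checking the offset inside the current block (the
intended behavior).

public class ZeroCopyOffsetCheckSketch {
    // MappedByteBuffer regions are limited to Integer.MAX_VALUE bytes (~2GB).
    private static final long MAX_MMAP_LENGTH = Integer.MAX_VALUE;

    // Buggy check: rejects zero-copy for any position past 2GB in the HDFS file.
    static boolean canUseZeroCopyBuggy(long filePos) {
        return filePos < MAX_MMAP_LENGTH;
    }

    // Intended check: only the offset inside the current block file matters.
    static boolean canUseZeroCopyFixed(long filePos, long blockStartInFile) {
        long offsetInBlock = filePos - blockStartInFile;
        return offsetInBlock < MAX_MMAP_LENGTH;
    }

    public static void main(String[] args) {
        long filePos = 3L * 1024 * 1024 * 1024;        // 3GB into the HDFS file
        long blockStart = filePos - 64L * 1024 * 1024; // only 64MB into the current block
        System.out.println("buggy check allows zero-copy: "
                + canUseZeroCopyBuggy(filePos));              // false (the bug)
        System.out.println("fixed check allows zero-copy: "
                + canUseZeroCopyFixed(filePos, blockStart));  // true
    }
}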



--
This message was sent by Atlassian JIRA
(v6.2#6252)
