Yi Liu created HADOOP-11039:
-------------------------------

             Summary: ByteBufferReadable API doc is inconsistent with the implementations.
                 Key: HADOOP-11039
                 URL: https://issues.apache.org/jira/browse/HADOOP-11039
             Project: Hadoop Common
          Issue Type: Bug
          Components: documentation
            Reporter: Yi Liu
            Assignee: Yi Liu
            Priority: Minor
In {{ByteBufferReadable}}, the API doc of {{int read(ByteBuffer buf)}} says:

{quote}
After a successful call, buf.position() and buf.limit() should be unchanged, and therefore any data can be immediately read from buf. buf.mark() may be cleared or updated.
{quote}

{quote}
@param buf the ByteBuffer to receive the results of the read operation. Up to buf.limit() - buf.position() bytes may be read.
{quote}

But the actual implementations (e.g. {{DFSInputStream}}, {{RemoteBlockReader2}}) behave differently:

*Upon return, buf.position() will be advanced by the number of bytes read.*

The implementation in {{RemoteBlockReader2}} is as follows:

{code}
@Override
public int read(ByteBuffer buf) throws IOException {
  if (curDataSlice == null || curDataSlice.remaining() == 0 && bytesNeededToFinish > 0) {
    readNextPacket();
  }
  if (curDataSlice.remaining() == 0) {
    // we're at EOF now
    return -1;
  }

  int nRead = Math.min(curDataSlice.remaining(), buf.remaining());
  ByteBuffer writeSlice = curDataSlice.duplicate();
  writeSlice.limit(writeSlice.position() + nRead);
  buf.put(writeSlice);
  curDataSlice.position(writeSlice.position());

  return nRead;
}
{code}

This description is important because it tells users how to call the API, and all implementations should exhibit the same behavior. We should fix the javadoc to match the implementations.
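For reference, a minimal caller-side sketch (not part of this issue; the class name, helper method, and buffer size are illustrative only) of how the current implementation behavior has to be handled: since {{buf.position()}} is advanced by the number of bytes read, the caller must flip (or rewind) the buffer before consuming the data, which is exactly what the current javadoc fails to convey:

{code}
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ByteBufferReadExample {
  /** Reads up to one buffer's worth of data and prepares it for consumption. */
  public static ByteBuffer readOnce(FileSystem fs, Path path) throws IOException {
    ByteBuffer buf = ByteBuffer.allocate(8192);
    try (FSDataInputStream in = fs.open(path)) {
      // Delegates to the ByteBufferReadable implementation (e.g. DFSInputStream).
      // On return, buf.position() has been advanced by nRead bytes.
      int nRead = in.read(buf);
      if (nRead > 0) {
        // flip() so that position() == 0 and limit() == nRead, making the
        // freshly read bytes readable by the caller.
        buf.flip();
      }
    }
    return buf;
  }
}
{code}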