[ https://issues.apache.org/jira/browse/HADOOP-11938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545154#comment-14545154 ]
Yi Liu commented on HADOOP-11938:
---------------------------------

Looks good now. One nit, +1 after addressing it. In TestRawCoderBase.java:
{code}
Assert.fail("Encoding test with bad input passed");
{code}
We should write "Encoding test with bad input should fail"; the current message says the opposite of what is intended. The same applies to a few other Assert.fail messages. Furthermore, we need to fix the Jenkins warnings (release audit/checkstyle/whitespace) if they are related to this patch.

> Fix ByteBuffer version encode/decode API of raw erasure coder
> -------------------------------------------------------------
>
>                 Key: HADOOP-11938
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11938
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: io
>            Reporter: Kai Zheng
>            Assignee: Kai Zheng
>         Attachments: HADOOP-11938-HDFS-7285-v1.patch, HADOOP-11938-HDFS-7285-v2.patch, HADOOP-11938-HDFS-7285-v3.patch, HADOOP-11938-HDFS-7285-workaround.patch
>
> While investigating a test failure in {{TestRecoverStripedFile}}, we found one issue in the raw erasure coder, caused by an optimization in the code below. It assumes the heap buffer backed by the byte array available for reading or writing always starts at zero and occupies the whole backing array.
> {code}
> protected static byte[][] toArrays(ByteBuffer[] buffers) {
>   byte[][] bytesArr = new byte[buffers.length][];
>   ByteBuffer buffer;
>   for (int i = 0; i < buffers.length; i++) {
>     buffer = buffers[i];
>     if (buffer == null) {
>       bytesArr[i] = null;
>       continue;
>     }
>     if (buffer.hasArray()) {
>       bytesArr[i] = buffer.array();
>     } else {
>       throw new IllegalArgumentException("Invalid ByteBuffer passed, " +
>           "expecting heap buffer");
>     }
>   }
>   return bytesArr;
> }
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
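The broken assumption in {{toArrays}} can be demonstrated with a sliced heap buffer: {{array()}} always returns the entire backing array, even when the buffer's readable region starts at a non-zero {{arrayOffset()}} or position. The sketch below is illustrative only (the class and method names {{ToArraysDemo}} / {{toArraysSafe}} are hypothetical, not from the patch); it shows one safe alternative that copies exactly the readable region instead of handing out the raw backing array.

```java
import java.nio.ByteBuffer;

public class ToArraysDemo {

  // Hypothetical safe variant: copies the readable region of each heap
  // buffer, honoring position()/arrayOffset(), instead of returning
  // array() directly as the original toArrays() does.
  static byte[][] toArraysSafe(ByteBuffer[] buffers) {
    byte[][] bytesArr = new byte[buffers.length][];
    for (int i = 0; i < buffers.length; i++) {
      ByteBuffer buffer = buffers[i];
      if (buffer == null) {
        continue;
      }
      if (!buffer.hasArray()) {
        throw new IllegalArgumentException("Invalid ByteBuffer passed, " +
            "expecting heap buffer");
      }
      byte[] copy = new byte[buffer.remaining()];
      // Read through a duplicate so the caller's position is untouched.
      buffer.duplicate().get(copy);
      bytesArr[i] = copy;
    }
    return bytesArr;
  }

  public static void main(String[] args) {
    byte[] backing = {0, 1, 2, 3, 4, 5, 6, 7};
    ByteBuffer whole = ByteBuffer.wrap(backing);
    whole.position(2);
    // A slice starting at offset 2: 6 readable bytes, arrayOffset() == 2.
    ByteBuffer slice = whole.slice();

    // array() still exposes all 8 bytes of the backing array, so the
    // original optimization would hand the coder stale leading bytes.
    System.out.println(slice.array().length);                 // 8
    System.out.println(toArraysSafe(new ByteBuffer[]{slice})[0].length); // 6
  }
}
```

The copy costs an extra allocation per buffer, which is why the original code took the shortcut; the real fix in the patch instead has to track offsets and lengths explicitly, but the demo pinpoints why returning {{array()}} alone is incorrect.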