[ https://issues.apache.org/jira/browse/HDFS-6560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041477#comment-14041477 ]

Colin Patrick McCabe commented on HDFS-6560:
--------------------------------------------

{code}
+  public static void verifyChunkedSumsByteArray(int bytesPerSum,
+      int checksumType, byte[] sums, int sumsOffset, byte[] data,
+      int dataOffset, int dataLength, String fileName, long basePos)
+      throws ChecksumException {
+    nativeVerifyChunkedSumsByteArray(bytesPerSum, checksumType,
+        sums, sumsOffset,
+        data, dataOffset, dataLength,
+        fileName, basePos);
+  }
{code}
What's the purpose of this wrapper function?  It just passes all its arguments 
straight through to the native method. A public method can carry the {{native}} 
modifier directly, so the wrapper can be dropped.
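
For comparison, a minimal sketch of the suggested shape (the class name is illustrative, and the real method also declares {{throws ChecksumException}}, omitted here because that class is Hadoop-specific). The {{main}} method only inspects the modifier via reflection, since actually invoking the method would require the native library to be loaded:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class NativeCrc32Sketch {
  // The public entry point carries the native modifier directly;
  // no Java-level pass-through wrapper is needed.
  public static native void verifyChunkedSumsByteArray(int bytesPerSum,
      int checksumType, byte[] sums, int sumsOffset, byte[] data,
      int dataOffset, int dataLength, String fileName, long basePos);

  public static void main(String[] args) throws Exception {
    // Declaring a method native compiles fine without the library;
    // only calling it would raise UnsatisfiedLinkError.
    Method m = NativeCrc32Sketch.class.getDeclaredMethod(
        "verifyChunkedSumsByteArray", int.class, int.class, byte[].class,
        int.class, byte[].class, int.class, int.class, String.class,
        long.class);
    System.out.println(Modifier.isNative(m.getModifiers())); // prints "true"
  }
}
```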

{code}
+      sums_addr = (*env)->GetPrimitiveArrayCritical(env, j_sums, NULL);
+      data_addr = (*env)->GetPrimitiveArrayCritical(env, j_data, NULL);
+
+      if (unlikely(!sums_addr || !data_addr)) {
+        THROW(env, "java/lang/OutOfMemoryError",
+          "not enough memory for byte arrays in JNI code");
+        return;
+      }
{code}

This is going to leak memory if {{GetPrimitiveArrayCritical}} succeeds for 
{{sums_addr}} but fails for {{data_addr}}: the error path throws without 
calling {{ReleasePrimitiveArrayCritical}} on the array that was already pinned.

> Byte array native checksumming on DN side
> -----------------------------------------
>
>                 Key: HDFS-6560
>                 URL: https://issues.apache.org/jira/browse/HDFS-6560
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode, hdfs-client, performance
>            Reporter: James Thomas
>            Assignee: James Thomas
>         Attachments: HDFS-3528.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)
