[ https://issues.apache.org/jira/browse/HADOOP-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676059#comment-13676059 ]

Todd Lipcon commented on HADOOP-9601:
-------------------------------------

Another thing which I think will have to be addressed before commit: I don't 
think it's legal to use the THROW(...) macro inside of the array critical 
region, since it allocates an object on the Java heap. This can create a 
deadlock (e.g., that allocation could want to trigger a GC, but GC is blocked 
because of the critical region). In the exception cases, we'll have to store 
the exception message and type info on the C heap or stack, release the array, 
and then throw the exception outside the critical region.
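The deferred-throw pattern could look roughly like the sketch below. This is not a drop-in patch: `compute_crc32c` and the exception class name are placeholders, and THROW stands for the existing macro in Hadoop's native code. The key point is that between Get/ReleasePrimitiveArrayCritical only C-stack/C-heap work happens.

```c
#include <jni.h>
#include <stdint.h>
#include <stdio.h>

// Hypothetical CRC routine standing in for the real native implementation.
uint32_t compute_crc32c(const uint8_t *buf, int len);

static void crc_check(JNIEnv *env, jbyteArray jdata, jint off, jint len,
                      jint expected_crc) {
  char err_msg[256];       // error text lives on the C stack
  int have_error = 0;

  jbyte *data = (*env)->GetPrimitiveArrayCritical(env, jdata, NULL);
  if (data == NULL) {
    return;  // OOM: an exception is already pending
  }

  uint32_t crc = compute_crc32c((const uint8_t *)data + off, len);
  if ((jint)crc != expected_crc) {
    // Inside the critical region: no JNI calls, no THROW(), no Java-heap
    // allocation. Just record what we need on the stack.
    snprintf(err_msg, sizeof(err_msg),
             "Checksum error: expected 0x%08x got 0x%08x",
             expected_crc, crc);
    have_error = 1;
  }

  (*env)->ReleasePrimitiveArrayCritical(env, jdata, data, JNI_ABORT);

  // Now that the array is released, it is safe to allocate the exception.
  if (have_error) {
    THROW(env, "org/apache/hadoop/fs/ChecksumException", err_msg);
  }
}
```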

I think if you run the tests with -Xcheck:jni, it should catch these cases 
where other JNI calls are being used inside the critical region.
                
> Support native CRC on byte arrays
> ---------------------------------
>
>                 Key: HADOOP-9601
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9601
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: performance, util
>    Affects Versions: 3.0.0
>            Reporter: Todd Lipcon
>         Attachments: HADOOP-9601-WIP-01.patch
>
>
> When we first implemented the Native CRC code, we only did so for direct byte 
> buffers, because these correspond directly to native heap memory and thus 
> make it easy to access via JNI. We'd generally assumed that accessing byte[] 
> arrays from JNI was not efficient enough, but now that I know more about JNI 
> I don't think that's true -- we just need to make sure that the critical 
> sections where we lock the buffers are short.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira