[ https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Anoop Sam John updated HBASE-14940:
-----------------------------------
    Resolution: Fixed
  Hadoop Flags: Reviewed
 Fix Version/s: 1.0.4
                0.98.17
                1.1.3
                1.3.0
                1.2.0
                2.0.0
        Status: Resolved  (was: Patch Available)

Pushed to 0.98+ versions. Thanks all for the reviews.

> Make our unsafe based ops more safe
> -----------------------------------
>
>                 Key: HBASE-14940
>                 URL: https://issues.apache.org/jira/browse/HBASE-14940
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Anoop Sam John
>            Assignee: Anoop Sam John
>             Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
>         Attachments: HBASE-14940.patch, HBASE-14940_branch-1.patch,
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch,
> HBASE-14940_branch-1.patch, HBASE-14940_v2.patch
>
>
> Thanks for the nice findings [~ikeda]
> This jira solves 3 issues with Unsafe operations and ByteBufferUtils:
> 1. We can do sun.misc.Unsafe based reads and writes only if the unsafe
> package is available and the underlying platform has unaligned-access
> capability. But we were missing the second check.
> 2. Java NIO does a chunk-based copy when doing Unsafe copyMemory, with a
> max chunk size of 1 MB. This is done because "A limit is imposed to allow
> for safepoint polling during a large copy", as the comment in Bits.java
> explains. We will do it the same way.
> 3. In ByteBufferUtils, when Unsafe is not available and the ByteBuffers
> are off-heap, we were reading/copying byte by byte. We can avoid this and
> do it a better way.


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
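To illustrate point 1, here is a minimal sketch (not the actual HBASE-14940 patch) of how a platform unaligned-access check can be done before enabling Unsafe-based reads and writes. The reflective call to the private JDK method `java.nio.Bits.unaligned()` is a known technique; the class name `UnalignedCheck` and the architecture-name fallback list are my own assumptions for illustration.

```java
import java.lang.reflect.Method;

public class UnalignedCheck {
    /**
     * Sketch: detect whether the platform supports unaligned memory access.
     * Unsafe-based reads/writes should be enabled only when BOTH the
     * sun.misc.Unsafe class is usable AND this check passes.
     */
    public static boolean unaligned() {
        try {
            // java.nio.Bits.unaligned() is private JDK internals; call reflectively.
            Class<?> bits = Class.forName("java.nio.Bits");
            Method m = bits.getDeclaredMethod("unaligned");
            m.setAccessible(true);
            return (boolean) m.invoke(null);
        } catch (Throwable t) {
            // Assumed fallback: infer from the CPU architecture name.
            String arch = System.getProperty("os.arch", "");
            return arch.equals("amd64") || arch.equals("x86_64")
                    || arch.equals("i386") || arch.equals("x86");
        }
    }

    public static void main(String[] args) {
        System.out.println("unaligned access supported: " + unaligned());
    }
}
```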
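For point 2, a sketch of the chunked copy idea (again not the actual patch): `copyMemory` is called at most 1 MB at a time so the JVM can reach a safepoint between chunks during a large copy. The constant name `UNSAFE_COPY_THRESHOLD` mirrors the one in `java.nio.Bits`; the class and method names here are illustrative.

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class ChunkedCopy {
    private static final Unsafe UNSAFE;
    static {
        try {
            // Standard reflective grab of the Unsafe singleton.
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            UNSAFE = (Unsafe) f.get(null);
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Max bytes per copyMemory call; "a limit is imposed to allow for
    // safepoint polling during a large copy" (comment in java.nio.Bits).
    private static final long UNSAFE_COPY_THRESHOLD = 1024L * 1024L;

    static void copy(byte[] src, int srcOff, byte[] dst, int dstOff, int len) {
        long srcAddr = Unsafe.ARRAY_BYTE_BASE_OFFSET + srcOff;
        long dstAddr = Unsafe.ARRAY_BYTE_BASE_OFFSET + dstOff;
        long remaining = len;
        while (remaining > 0) {
            // Copy in chunks of at most 1 MB instead of one huge copyMemory.
            long size = Math.min(remaining, UNSAFE_COPY_THRESHOLD);
            UNSAFE.copyMemory(src, srcAddr, dst, dstAddr, size);
            remaining -= size;
            srcAddr += size;
            dstAddr += size;
        }
    }
}
```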
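And for point 3, one way to avoid the byte-by-byte loop when Unsafe is unavailable and the buffers are off-heap is to use NIO's bulk `put(ByteBuffer)` on duplicates, which takes an optimized copy path for direct buffers and leaves the original buffers' position/limit untouched. This is a sketch of the general technique, not necessarily what ByteBufferUtils ended up doing; the class name `BBCopy` is assumed.

```java
import java.nio.ByteBuffer;

public class BBCopy {
    /**
     * Copy {@code length} bytes from src[srcPos..] to dst[dstPos..] without
     * disturbing either buffer's position/limit, using a single bulk
     * put(ByteBuffer) instead of a per-byte get/put loop.
     */
    static void copy(ByteBuffer src, int srcPos, ByteBuffer dst, int dstPos, int length) {
        // Duplicates share the backing memory but have independent position/limit.
        ByteBuffer s = src.duplicate();
        ByteBuffer d = dst.duplicate();
        s.position(srcPos).limit(srcPos + length);
        d.position(dstPos);
        d.put(s);  // one bulk copy; direct buffers use an optimized path
    }
}
```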