[ https://issues.apache.org/jira/browse/HBASE-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Yu Li updated HBASE-14463:
--------------------------
    Attachment: TestBucketCache-new_with_IdReadWriteLock.png
                TestBucketCache-new_with_IdLock.png
                HBASE-14463_v5.patch

The new patch adds an additional thread in CacheTestUtils#hammerSingleKey that evicts the single-key block every 10ms, so the test now measures performance under read/write contention. Perf numbers before/after the patch are as follows:

w/ IdLock and blocksize=16k: {color:red}155.836s{color}
w/ IdReadWriteLock and blocksize=16k: {color:green}1.162s{color}

Refer to the attached JUnit screenshots for more details.

> Severe performance downgrade when parallel reading a single key from BucketCache
> ---------------------------------------------------------------------------------
>
>                 Key: HBASE-14463
>                 URL: https://issues.apache.org/jira/browse/HBASE-14463
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.98.14, 1.1.2
>            Reporter: Yu Li
>            Assignee: Yu Li
>             Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
>         Attachments: HBASE-14463.patch, HBASE-14463_v2.patch, HBASE-14463_v3.patch, HBASE-14463_v4.patch, HBASE-14463_v5.patch, TestBucketCache-new_with_IdLock.png, TestBucketCache-new_with_IdReadWriteLock.png, TestBucketCache_with_IdLock.png, TestBucketCache_with_IdReadWriteLock-resolveLockLeak.png, TestBucketCache_with_IdReadWriteLock.png
>
>
> We store feature data of online items in HBase, do machine learning on these features, and supply the outputs to our online search engine. In this scenario we launch hundreds of YARN workers, and each worker reads all features of one item (i.e. a single rowkey in HBase), so there is heavy parallel reading of a single rowkey.
> We were using LruCache but recently started trying BucketCache to resolve a GC issue, and just as titled we observed a severe performance downgrade. After some analysis we found the root cause is the lock in BucketCache#getBlock, as shown below:
> {code}
> try {
>   lockEntry = offsetLock.getLockEntry(bucketEntry.offset());
>   // ...
>   if (bucketEntry.equals(backingMap.get(key))) {
>     // ...
>     int len = bucketEntry.getLength();
>     Cacheable cachedBlock = ioEngine.read(bucketEntry.offset(), len,
>         bucketEntry.deserializerReference(this.deserialiserMap));
> {code}
> Since ioEngine.read involves an array copy, it is much more costly than the corresponding operation in LruCache. And since IdLock#getLockEntry uses synchronized exclusion, parallel reads landing on the same bucket offset are executed serially, which causes really bad performance.
> To resolve the problem, we propose to use ReentrantReadWriteLock in BucketCache, and introduce a new class called IdReadWriteLock to implement it.
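To make the proposal concrete, below is a minimal, hypothetical sketch of the idea behind an id-keyed read/write lock pool; it is illustrative only and not the code from the attached patch. It maps each id (e.g. a bucket offset) to a ReentrantReadWriteLock and holds the locks through WeakReference so locks that nobody is using can be garbage-collected. The class and member names are made up for the example.

{code}
import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Sketch of an id-keyed read/write lock pool: all callers asking for the same
 * id (e.g. a bucket offset) share one ReentrantReadWriteLock, so readers can
 * proceed in parallel while eviction takes the write lock for exclusive access.
 */
public class IdReadWriteLockSketch {
  // One lock per id; WeakReference values let locks nobody holds be reclaimed.
  private final ConcurrentHashMap<Long, WeakReference<ReentrantReadWriteLock>> locks =
      new ConcurrentHashMap<>();

  public ReentrantReadWriteLock getLock(long id) {
    while (true) {
      WeakReference<ReentrantReadWriteLock> ref = locks.get(id);
      ReentrantReadWriteLock existing = (ref == null) ? null : ref.get();
      if (existing != null) {
        return existing;
      }
      ReentrantReadWriteLock created = new ReentrantReadWriteLock();
      WeakReference<ReentrantReadWriteLock> newRef = new WeakReference<>(created);
      if (ref == null) {
        // No entry yet: install ours unless another thread beat us to it.
        if (locks.putIfAbsent(id, newRef) == null) {
          return created;
        }
      } else {
        // Entry exists but its lock was collected: swap in a fresh one atomically.
        if (locks.replace(id, ref, newRef)) {
          return created;
        }
      }
      // Lost a race; loop again and pick up the winner's lock.
    }
  }
}
{code}

With a pool like this, the read path in BucketCache#getBlock could take the per-offset read lock around ioEngine.read so parallel reads of the same offset no longer serialize, while eviction would take the write lock; whether the committed IdReadWriteLock follows exactly this shape is not shown here.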