[ https://issues.apache.org/jira/browse/HBASE-14463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975824#comment-14975824 ]
Yu Li commented on HBASE-14463:
-------------------------------

{quote}
I did not do the fixed row keys part as u did for PE tool
{quote}
Well, the "fixed" row keys are also *randomly* generated by the PE tool; I just save them to a file and reuse them when testing both scenarios, to avoid deviation caused by different key distributions. You could regard the change I made to the PE tool as: 1. generate random keys for the read queries; 2. test the cluster with and without the patch; 3. compare the results. Notice that even with the same implementation, the perf numbers of different runs diverge, and the gap can be as much as 3~5%, which shows that the random key distribution alone causes fluctuation in the perf numbers. So the fairest way is to test with the same random keys, agree [~anoop.hbase]?

> Severe performance downgrade when parallel reading a single key from BucketCache
> ---------------------------------------------------------------------------------
>
>                 Key: HBASE-14463
>                 URL: https://issues.apache.org/jira/browse/HBASE-14463
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.98.14, 1.1.2
>            Reporter: Yu Li
>            Assignee: Yu Li
>             Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.16
>
>         Attachments: GC_with_WeakObjectPool.png, HBASE-14463.patch,
> HBASE-14463_v11.patch, HBASE-14463_v12.patch, HBASE-14463_v2.patch,
> HBASE-14463_v3.patch, HBASE-14463_v4.patch, HBASE-14463_v5.patch,
> TestBucketCache-new_with_IdLock.png,
> TestBucketCache-new_with_IdReadWriteLock.png,
> TestBucketCache_with_IdLock-latest.png, TestBucketCache_with_IdLock.png,
> TestBucketCache_with_IdReadWriteLock-latest.png,
> TestBucketCache_with_IdReadWriteLock-resolveLockLeak.png,
> TestBucketCache_with_IdReadWriteLock.png, pe_use_same_keys.patch,
> test-results.tar.gz
>
> We store feature data of online items in HBase, do machine learning on these
> features, and supply the outputs to our online search engine. In such a
> scenario we launch hundreds of YARN workers and each worker reads all
> features of one item (i.e. a single rowkey in HBase), so there is heavy
> parallel reading on a single rowkey.
> We were using LruCache but recently started trying BucketCache to resolve the
> GC issue, and just as titled we have observed a severe performance downgrade.
> After some analysis we found the root cause is the lock in
> BucketCache#getBlock, as shown below
> {code}
> try {
>   lockEntry = offsetLock.getLockEntry(bucketEntry.offset());
>   // ...
>   if (bucketEntry.equals(backingMap.get(key))) {
>     // ...
>     int len = bucketEntry.getLength();
>     Cacheable cachedBlock = ioEngine.read(bucketEntry.offset(), len,
>         bucketEntry.deserializerReference(this.deserialiserMap));
> {code}
> Since ioEngine.read involves an array copy, it is much more time-consuming
> than the corresponding operation in LruCache. And since IdLock#getLockEntry
> uses synchronized, parallel reads hitting the same bucket entry are executed
> serially, which causes really bad performance.
> To resolve the problem, we propose to use ReentrantReadWriteLock in
> BucketCache, and introduce a new class called IdReadWriteLock to implement it.
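For readers skimming the proposal, a minimal sketch of the keyed read-write lock idea follows. This is not the actual IdReadWriteLock from the attached patch: the class name, the ConcurrentHashMap-backed pool, and the omission of lock reclamation (the attachments suggest the patch pairs the locks with a WeakObjectPool) are simplifications for illustration only.

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Illustrative sketch of a per-id read-write lock. Readers of the same block
 * offset share the read lock and proceed in parallel; a writer (e.g. eviction)
 * takes the write lock for exclusive access. Lock reclamation is omitted.
 */
public class IdReadWriteLockSketch {
  private final ConcurrentHashMap<Long, ReentrantReadWriteLock> lockPool =
      new ConcurrentHashMap<>();

  /** Returns the lock for the given id, creating it on first use. */
  public ReentrantReadWriteLock getLock(long id) {
    return lockPool.computeIfAbsent(id, k -> new ReentrantReadWriteLock());
  }

  public static void main(String[] args) throws InterruptedException {
    IdReadWriteLockSketch locks = new IdReadWriteLockSketch();
    long offset = 42L;
    // Two readers on the same id do not block each other.
    Runnable reader = () -> {
      ReentrantReadWriteLock lock = locks.getLock(offset);
      lock.readLock().lock();
      try {
        System.out.println(Thread.currentThread().getName()
            + " reading block at offset " + offset);
      } finally {
        lock.readLock().unlock();
      }
    };
    Thread t1 = new Thread(reader);
    Thread t2 = new Thread(reader);
    t1.start();
    t2.start();
    t1.join();
    t2.join();
  }
}
{code}

With something along these lines, the read path in BucketCache#getBlock would take the read lock around ioEngine.read instead of the exclusive IdLock entry, so concurrent reads of the same block no longer serialize, while eviction would take the write lock.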