I could buy these results for a totally disk-bound application as far as
reads go. I have been running some experiments with HFiles on disk. The
memory:data ratio is 1:2, so half the data can fit in memory. I then create a
new HFileScanner and call scanner.seekTo("someKeyValue"). On a 4-HDD
system, I can get roughly 400 reads per second. The hard drives end up running
quite hot, and the most I can push this setup to is 500 reads per second. Note that these are
raw HFile seeks - no HBase or HDFS layers are involved. I suspect HBase just
issues way more iops than it needs to.

Varun

On Wed, Nov 27, 2013 at 12:01 AM, Vladimir Rodionov <[email protected]> wrote:

> Oh, I got it. "Next big thing for HBase" is not MapR M7, but global
> optimization and tuning of HBase itself.
>
>
> On Tue, Nov 26, 2013 at 11:56 PM, Vladimir Rodionov <[email protected]> wrote:
>
> > Why do you think I got excited? I do not work for MapR. MapR has posted
> > benchmark results and some numbers for HBase look quite low. I thought
> > maybe the community will be interested in these results.
> >
> >
> > On Tue, Nov 26, 2013 at 10:04 PM, lars hofhansl <[email protected]> wrote:
> >
> >> Excuse me if I do not get too excited about a report published by MapR
> >> that comes to the conclusion that MapR's M7 is faster than the "other
> >> distribution".
> >>
> >> -- Lars
> >>
> >>
> >> ________________________________
> >> From: Vladimir Rodionov <[email protected]>
> >> To: "[email protected]" <[email protected]>
> >> Sent: Tuesday, November 26, 2013 8:00 PM
> >> Subject: Next big thing for HBase
> >>
> >>
> >> Global optimization and performance tuning:
> >>
> >> http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=19&ved=0CG8QFjAIOAo&url=http%3A%2F%2Fwww.mapr.com%2FDownload-document%2F52-MapR-M7-Performance-Benchmark&ei=QGuVUr-cA6ewjAL_94DoCQ&usg=AFQjCNH2Brlp5n2rIAarEbj39c_X_lnvDg&sig2=bLTKxbspEgsRN3bJXUnspQ&bvm=bv.57155469,d.cGE&cad=rja
> >>
> >> Some numbers from this report do not look right for HBase. I do not
> >> believe that 5 RS on Fusion drive score only 1605 reads per sec per node.
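
For reference, here is a minimal sketch of the kind of raw HFile seek loop
described at the top of this message. It assumes roughly the 0.94/0.96-era
HFile API (HFile.createReader, Reader.getScanner, HFileScanner.seekTo);
exact signatures differ across HBase versions, and the file path and key
pattern are placeholders, not the ones used in the experiment.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.io.hfile.HFileScanner;
import org.apache.hadoop.hbase.util.Bytes;

public class RawHFileSeekBench {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Local filesystem: no HBase or HDFS layers in the read path.
    FileSystem fs = FileSystem.getLocal(conf);
    Path hfilePath = new Path(args[0]);   // path to an HFile on local disk

    HFile.Reader reader =
        HFile.createReader(fs, hfilePath, new CacheConfig(conf), conf);
    reader.loadFileInfo();

    // cacheBlocks=false, pread=true: each seek is served from disk,
    // not from the block cache.
    HFileScanner scanner = reader.getScanner(false, true);

    long start = System.currentTimeMillis();
    int seeks = 0;
    for (int i = 0; i < 10000; i++) {
      byte[] key = Bytes.toBytes("someKeyValue-" + i);  // placeholder key pattern
      // seekTo returns -1 only if the key sorts before the first key in the file.
      if (scanner.seekTo(key) != -1) {
        seeks++;
      }
    }
    long elapsedMs = System.currentTimeMillis() - start;
    System.out.println(seeks + " seeks in " + elapsedMs + " ms ("
        + (seeks * 1000L / Math.max(elapsedMs, 1)) + " seeks/sec)");

    reader.close();
  }
}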
