One tunable is the scanner caching, which is available in this call:
http://hadoop.apache.org/hbase/docs/r0.20.3/api/org/apache/hadoop/hbase/client/HTable.html#setScannerCaching(int)
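For example, something along these lines (a rough sketch against the 0.20
client API; the table name "test_table" and the caching value of 100 are just
placeholders, and exception handling is omitted for brevity):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;

    HBaseConfiguration conf = new HBaseConfiguration();
    HTable table = new HTable(conf, "test_table");
    // Fetch 100 rows per RPC to the region server instead of the default of 1.
    table.setScannerCaching(100);

    Scan scan = new Scan();
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result row : scanner) {
        // process each row here
      }
    } finally {
      scanner.close();
    }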

-ryan

On Fri, Feb 5, 2010 at 1:11 PM, Gabriel Ki <gab...@gmail.com> wrote:
> On page 3 of
> http://www.slideshare.net/schubertzhang/hbase-0200-performance-evaluation,
> for 0.20.0
> random reads: 1106, sequential reads: 5433.  It is about 1 to 5.
>
> In the wiki, http://wiki.apache.org/hadoop/Hbase/PerformanceEvaluation, for
> 0.19.0RC1
> random reads: 540, sequential reads: 464. It is about 1 to 1.
>
> I was doing a similar performance evaluation against HBase 0.20.3 on Hadoop
> 0.20.1.  I did not get sequential reads as good as those, and I would like to
> know what changed from 0.19.0RC1 to 0.20.x to make that improvement.
>
> Thanks,
> -gabe
>
>
> On Thu, Feb 4, 2010 at 10:58 PM, Stack <st...@duboce.net> wrote:
>
>> On Wed, Feb 3, 2010 at 3:58 PM, Gabriel Ki <gab...@gmail.com> wrote:
>> > Hi,
>> >
>> > I was reading
>> >
>> http://www.slideshare.net/schubertzhang/hbase-0200-performance-evaluation.
>> > Could someone explain what has changed to improve the random reads to
>> > sequential reads ratio from 1:1 to 1:5?  I can't seem to reproduce such
>> > good sequential reads.
>> >
>> Which slide is that, Gabriel?  Is it saying that we do 5 sequential reads
>> per random read?  What are you seeing?  Why not scan instead of doing
>> sequential reads?
>>
>> St.Ack
>>
>
