Hi Paul,

It should generally be possible; others are using HBase for time series (have a look at the schema at opentsdb.net if you have not done so).

What do your row key schema and a typical read access look like (scans over many rows / multiple regions ...)?
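For comparison, OpenTSDB's row key is roughly [metric id][base timestamp][tags], with the base timestamp rounded down to the hour so that all points for one metric-hour land in one row and scans stay contiguous. Here is a minimal sketch of that idea (the 4-byte metric id and field widths are illustrative simplifications, not OpenTSDB's exact layout):

```java
import java.nio.ByteBuffer;

public class TsRowKey {
    // Simplified OpenTSDB-style row key:
    // [metric id (4 bytes)][base timestamp aligned to the hour (4 bytes)]
    // Points within the hour go into column qualifiers, so one row
    // holds an hour of data for a metric.
    static byte[] rowKey(int metricId, long epochSeconds) {
        long baseTime = epochSeconds - (epochSeconds % 3600); // hour boundary
        return ByteBuffer.allocate(8)
                .putInt(metricId)
                .putInt((int) baseTime)
                .array();
    }

    public static void main(String[] args) {
        byte[] k1 = rowKey(42, 1328798523L); // a point within some hour
        byte[] k2 = rowKey(42, 1328798600L); // a later point, same hour
        System.out.println(java.util.Arrays.equals(k1, k2)); // prints "true"
    }
}
```

Because the metric id leads the key, a scan for one metric over a time range touches a narrow, contiguous key span instead of the whole table.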

Cheers


On 02/09/2012 02:12 PM, Paul Nickerson wrote:

I'm trying to create a time series table that contains a couple of billion
rows. It contains daily values for several million items. This table will
be visible to the outside world, so it should be able to support lots of reads
at any point in time. My plan is to use map/reduce every night to batch-load
the day's values for each of the items into that table. The problem is that
read performance is dismal while I'm writing data to the table.


Is there any way to accomplish what I'm trying to do? FWIW, I'm currently using
the Hive/HBase integration to load data into the HBase table.


Thank you,
Paul Nickerson
