With a large number of hfiles, you may run into OOM when doing compaction.

On Wed, Jun 12, 2013 at 10:14 AM, Rahul Ravindran <rahu...@yahoo.com> wrote:

> Hello,
> I am trying to understand the downsides of having a large number of hfiles
> as a result of a large hbase.hstore.compactionThreshold.
>
>   This delays major compaction. However, the amount of data that needs to
> be read and re-written as a single hfile during major compaction will
> remain the same unless we have a large number of deletes or expired rows.
>
> I understand that random reads will be affected, since each hfile may be a
> candidate for containing the row, but is there any other downside I am
> missing?
>
>
> ~Rahul.
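
For reference, a minimal hbase-site.xml sketch that raises the compaction
threshold discussed above (the value shown is illustrative, not a
recommendation):

    <!-- hbase-site.xml: illustrative value only -->
    <property>
      <!-- Minimum number of store files in a store before a compaction is
           considered; raising it defers compaction work but leaves more
           hfiles that each read may have to check. -->
      <name>hbase.hstore.compactionThreshold</name>
      <value>10</value>
    </property>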
