I'm pretty sure something else is going on.

1) What does it log when it shuts down? ZooKeeper session timeout?
OOME? HDFS errors?

2) Is your cluster meeting all the requirements? Especially the last
bullet point? See
http://hadoop.apache.org/hbase/docs/r0.20.4/api/overview-summary.html#requirements
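
For reference, the usual way to turn off time-based major compactions
(rather than just stretching the interval) is to set the interval to 0 in
hbase-site.xml. A sketch, assuming the 0.20-era property name
hbase.hregion.majorcompaction (value is in milliseconds):

```xml
<!-- hbase-site.xml fragment (sketch): hbase.hregion.majorcompaction is the
     interval between automatic major compactions, in milliseconds.
     0 disables the time-based trigger; manual compactions still work. -->
<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>0</value>
</property>
```

Note that this only disables the periodic trigger; minor compactions and
manually requested major compactions still run.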

J-D

On Wed, May 26, 2010 at 9:07 AM, Vidhyashankar Venkataraman
<[email protected]> wrote:
> Are there any side effects to turning major compactions off, other than a
> hit in read performance?
>
> I was trying to merge a 120 GB update (modify/insert/delete operations) into
> a 2 TB fully compacted HBase table with 5 region servers using a MapReduce
> job. Each RS was serving around 2,000 regions (256 MB max size). Major
> compactions were turned off before the job started (by setting the
> compaction period very high, to around 4 or 5 days).
>
> As the job was going on, the region servers just shut down after the table
> reached near-100% fragmentation (as shown in the web interface). On looking
> at the RS logs, I saw that there were compaction checks for each region
> which obviously didn't clear, and the RSs shut down soon after the checks.
> I tried restarting the database after killing the MapReduce job (still with
> major compactions turned off). The RSs shut down soon after booting up.
>
>   Is this expected? Even if the update files (the additional StoreFiles) per
> region get huge, won't the region split on its own?
>
> Thank you
> Vidhya
>
