Recursive delete upon cleanup

2013-07-30 Thread Ron Echeverri
I've run into this problem:

2013-07-30 00:01:02,126 WARN org.apache.hadoop.hbase.master.cleaner.CleanerChore: Error while cleaning the logs
java.io.IOException: Could not delete dir maprfs:/hbase-richpush/.archive/rich_push.alias_user, Error: Directory not empty, Try with recursive flag set to true ...
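For reference, the recursive versus non-recursive behaviour is just the boolean flag on the Hadoop FileSystem API. A minimal sketch of that call, assuming a plain FileSystem handle (the path is the one from the error above; this is illustrative only, not the actual CleanerChore code):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RecursiveDeleteSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path dir = new Path("maprfs:/hbase-richpush/.archive/rich_push.alias_user");
        FileSystem fs = dir.getFileSystem(conf);

        // delete(dir, false) fails on a non-empty directory -- which is what
        // the cleaner appears to hit here (MapRFS reports the
        // "Directory not empty, Try with recursive flag" error above).
        // delete(dir, true) removes the directory and everything under it.
        boolean deleted = fs.delete(dir, true);
        System.out.println("deleted: " + deleted);
      }
    }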

Re: Recursive delete upon cleanup

2013-08-01 Thread Ron Echeverri
On Tue, Jul 30, 2013 at 6:19 PM, Ted Yu wrote:
> I searched HBase 0.94 code base, hadoop 1 and hadoop 2 code base.
> I didn't find where 'Try with recursive flag' was logged.
> Mind giving us a bit more information on the Hadoop / HBase releases you were using?

You're right, that error message ...

quieting HBase metrics

2013-09-18 Thread Ron Echeverri
In http://hbase.apache.org/book/hbase_metrics.html we see:

"""
15.4.2. Warning To Ganglia Users
Warning to Ganglia Users: by default, HBase will emit a LOT of metrics per RegionServer which may swamp your installation. Options include either increasing Ganglia server capacity, or configuring HBase to emit fewer metrics ...
"""
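In the 0.94-era metrics framework this comes down to $HBASE_HOME/conf/hadoop-metrics.properties. A rough sketch of the sort of configuration the book is pointing at (the Ganglia host/port and the period are illustrative, and which contexts to silence is a local choice):

    # hadoop-metrics.properties
    # Push the hbase context to Ganglia, but only every 60 seconds:
    hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
    hbase.period=60
    hbase.servers=ganglia-collector.example.com:8649

    # Contexts you don't care about can be pointed at the null context
    # so nothing is emitted at all:
    jvm.class=org.apache.hadoop.metrics.spi.NullContext
    rpc.class=org.apache.hadoop.metrics.spi.NullContext

    # NullContextWithUpdateThread keeps values current for JMX without
    # pushing them anywhere:
    # hbase.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread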

Re: quieting HBase metrics

2013-09-23 Thread Ron Echeverri
... thing very helpful.

> Is there some way I could disable metrics for every table in HBase from being sent to Ganglia? Or just any information on how to configure HBase to emit fewer metrics?
>
> Thanks,
> Arati Pa...