Re: how to optimize for heavy writes scenario

2017-03-21 Thread Allan Yang
hbase.regionserver.thread.compaction.small = 30 Am I seeing it right? You are using 30 threads for small compactions. That is too many: in a heavy-writes scenario, that spends too much of the region server's resources on compactions. We also have OpenTSDB running on HBase in our company. IMHO, the conf should look like this: hbase.reg
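
For context, these compaction thread pools are set per region server in hbase-site.xml. The snippet below is only a sketch with illustrative values (Allan's actual recommendation is cut off in the archive preview above); the point is simply where the two properties live and what they control.

  <!-- hbase-site.xml: compaction thread pools (values here are illustrative only,
       not the truncated recommendation from the thread) -->
  <property>
    <name>hbase.regionserver.thread.compaction.small</name>
    <value>3</value>  <!-- threads handling small (short) compactions -->
  </property>
  <property>
    <name>hbase.regionserver.thread.compaction.large</name>
    <value>1</value>  <!-- threads handling large (long-running) compactions -->
  </property>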

Re: how to optimize for heavy writes scenario

2017-03-21 Thread Dejan Menges
Regarding du -sk, take a look here: https://issues.apache.org/jira/browse/HADOOP-9884 Also can hardly wait for this one to be fixed. On Tue, Mar 21, 2017 at 4:09 PM Hef wrote: > There were several curious things we have observed: > On the region servers, there were abnormally more reads than
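
For anyone seeing the same symptom: the HDFS DataNode periodically shells out to du -sk across its block directories to refresh disk-usage figures, and on nodes with a very large number of block files that directory walk alone can show up as read I/O. A rough way to gauge the cost yourself (the path below is an assumed example; substitute your own dfs.datanode.data.dir):

  # Run the same kind of scan the DataNode performs, and watch iostat in
  # another terminal while it runs. /data1/hdfs/dn is a placeholder path.
  time du -sk /data1/hdfs/dn/current
  # IIRC the refresh interval is governed by fs.du.interval (milliseconds)
  # in core-site.xml.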

Re: how to optimize for heavy writes scenario

2017-03-21 Thread Hef
There were several curious things we have observed: On the region servers, there were abnormally more reads than writes:

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda             608.00      6552.00         0.00       6552          0
sdb             345.00         2692
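
For readers who want to capture the same kind of snapshot, that column layout matches iostat with kilobyte units; the exact flags Hef ran are not preserved in the archive, so the command below is an assumption:

  # Per-device stats in kB, refreshed every second (sysstat iostat).
  iostat -dk 1
  # Compare kB_read/s vs kB_wrtn/s on the disks backing HDFS to confirm
  # the read-heavy pattern described above.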