Re: HBase/HDFS very high iowait

2012-02-22 Thread Per Steffensen
We observe about 50% iowait before even starting clients - that is, when 
there is actually no load from clients on the system. So only internal 
activity in HBase/HDFS can be causing this - HBase compaction? HDFS?
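For reference, the iowait percentage can be confirmed without extra tools by sampling /proc/stat directly. This is only a sketch - it assumes a Linux host and the standard proc(5) field order on the aggregate "cpu" line:

```shell
#!/bin/sh
# Sample the aggregate CPU counters from /proc/stat twice and report the
# share of time spent in iowait between the two samples.
# Fields on the "cpu" line (see proc(5)):
#   cpu user nice system idle iowait irq softirq steal ...
read_stat() { awk '/^cpu /{print $6, $2+$3+$4+$5+$6+$7+$8+$9}' /proc/stat; }

set -- $(read_stat); iow1=$1; tot1=$2
sleep 2
set -- $(read_stat); iow2=$1; tot2=$2

# Percentage of all jiffies in the interval that were iowait jiffies.
iowait_pct=$(( (iow2 - iow1) * 100 / (tot2 - tot1) ))
echo "iowait: ${iowait_pct}%"
```

Tools like iostat and top compute their iowait figure from the same counters, so this is mainly useful on boxes where sysstat is not installed.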


Regards, Per Steffensen

Per Steffensen wrote:

Hi

We have a system that includes, among other things, an HBase cluster and 
an HDFS cluster (primarily for HBase persistence). Depending on the 
environment we have between 3 and 8 machines running an HBase 
RegionServer and an HDFS DataNode. The OS is Ubuntu 10.04. On those 
machines we see very high iowait, very little real CPU usage, and 
unexpectedly low throughput (HBase creates, updates, reads and short 
scans). We do not get more throughput by putting more parallel load from 
the HBase clients on the HBase servers, so it is a real iowait problem. 
Any idea what might be wrong, and what we can do to improve throughput 
and lower iowait?
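When the iowait is real, it helps to see which processes are actually issuing the disk traffic. The RegionServer and DataNode each run in a JVM, so per-process I/O counters from /proc/&lt;pid&gt;/io narrow it down - a sketch, assuming a Linux kernel with task I/O accounting enabled:

```shell
#!/bin/sh
# Print cumulative bytes read from / written to the storage layer for
# each Java process (RegionServer and DataNode both run in a JVM).
# read_bytes/write_bytes count I/O that actually hit the block layer,
# not requests satisfied from the page cache.
proc_io() { awk '/^read_bytes:|^write_bytes:/{printf "%s%s ", $1, $2}' "/proc/$1/io" 2>/dev/null; }

for pid in $(pgrep java 2>/dev/null); do
  echo "pid $pid: $(proc_io "$pid")"
done
```

Sampling this a few seconds apart and diffing the numbers shows whether the traffic comes from the DataNode (e.g. replication, block scanner) or the RegionServer (e.g. compactions, WAL). Tools like iotop or pidstat -d give the same view interactively.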


Regards, Per Steffensen




Re: HBase/HDFS very high iowait

2012-02-22 Thread Per Steffensen

Per Steffensen wrote:
We observe about 50% iowait before even starting clients - that is, 
when there is actually no load from clients on the system. So only 
internal activity in HBase/HDFS can be causing this - HBase compaction? HDFS?
Ah, OK - that was only for about half a minute after restart. So it is 
basically down to 100% idle when there is no load from clients.


Regards, Per Steffensen
