Do you know if you have enough job load on the system? One way to check is
to look at the running map/reduce tasks in the JobTracker UI at the same
time you are watching the node's CPU usage.
Collecting Hadoop metrics via a metrics collection system such as Ganglia
will let you match up the CPU usage with job activity over time.
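As an illustration, pointing the Hadoop daemons at Ganglia is just a matter of editing hadoop-metrics.properties on each node. A minimal sketch, assuming a gmond listening on its default port 8649 at a hypothetical host `gmond-host`:

```properties
# hadoop-metrics.properties -- send daemon metrics to Ganglia
# (the host below is an assumption; point it at your own gmond)
dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
dfs.period=10
dfs.servers=gmond-host:8649

mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext
mapred.period=10
mapred.servers=gmond-host:8649

jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
jvm.period=10
jvm.servers=gmond-host:8649
```

After restarting the daemons, the per-node graphs in the Ganglia web UI make it easy to line up task activity against CPU load.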
Thanks guys. Unfortunately I had started the datanode with a local command
rather than from start-all.sh, so the relevant parts of the logs were
lost. I was watching the CPU load on all 8 cores via gkrellm at the
time, and they were definitely quiet. After a few minutes the jobs seemed
to get in sync.