Try picking a single operation, say hadoop dfs -ls, and start profiling:
- Time how long the client JVM takes to start. Enable debug logging on the
client side by exporting HADOOP_ROOT_LOGGER=DEBUG,console
- Time the gap between the client starting and the namenode audit log
showing the read request.
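A minimal sketch of that profiling session (the audit log path varies by install, and /var/log/hadoop is just an illustrative location):

```shell
# Enable client-side debug logging so each phase of the call is visible
export HADOOP_ROOT_LOGGER=DEBUG,console

# Wall-clock the whole client invocation, JVM startup included
time hadoop dfs -ls /

# Then compare against the namenode audit log timestamp for the
# corresponding listStatus request (log location varies by install)
tail -f /var/log/hadoop/hdfs-audit.log
```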
Also, note that JVM startup overhead, etc., means your -ls time is not
completely unreasonable. Using OpenJDK on a cluster of VMs, my hdfs
dfs -ls takes 1.88 seconds according to time (1.59 seconds of which is
user CPU time).
I'd be much more concerned about your slow transfer times. On the
same
Uhhh... Alexey, did you really mean that you are running 100 megabit per
second network links?
That is going to make hadoop run *really* slowly.
Also, putting RAID under any DFS, be it Hadoop or MapR, is not a good recipe
for performance. Not that it matters if you only have 10 megabytes per
I just realized one more thing. You mentioned the disk is a 700 GB RAID. How many
disks overall? What RAID configuration? We usually advocate JBOD with Hadoop to
avoid the performance hit of RAID, and let HDFS itself take care of replication.
Maybe you are running into this?
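For reference, JBOD here just means pointing HDFS at each raw disk's mount point directly instead of a RAID volume. A sketch of the relevant hdfs-site.xml (dfs.data.dir is the Hadoop 1.x property name, and the mount paths are illustrative):

```xml
<!-- hdfs-site.xml: one entry per physical disk, no RAID underneath -->
<property>
  <name>dfs.data.dir</name>
  <value>/data/disk1/dfs,/data/disk2/dfs,/data/disk3/dfs</value>
</property>
```

HDFS stripes blocks across the listed directories and handles redundancy via replication, so RAID adds write overhead without adding safety.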
Thanks,
+Vinod
On Oct
Hey Alexey,
Have you noticed this right from the start itself? Also, what exactly
do you mean by "Limited replication bandwidth between datanodes -
5Mb"? Are you talking about the dfs.balance.bandwidthPerSec property?
On Wed, Oct 10, 2012 at 10:53 AM, Alexey alexx...@gmail.com wrote:
Hello Harsh,
I noticed such issues from the start.
Yes, I mean the dfs.balance.bandwidthPerSec property; I set this property to
500.
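For what it's worth, dfs.balance.bandwidthPerSec is specified in bytes per second, so a value of 500 would throttle balancing to roughly half a kilobyte per second. A sketch of a more typical setting (1 MB/s, the usual default in this era's Hadoop):

```xml
<property>
  <name>dfs.balance.bandwidthPerSec</name>
  <!-- bytes per second: 1048576 = 1 MB/s -->
  <value>1048576</value>
</property>
```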
On 10/09/12 11:50 PM, Harsh J wrote:
Hi,
OK, can you detail the network infrastructure used here, and also
make sure your daemons are binding to the right interfaces
(perhaps use netstat to check)? What rate of transfer do you get for
simple file transfers (ftp, scp, etc.)?
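A quick sketch of both checks (the port numbers assume that era's HDFS defaults, and datanode2 is a placeholder hostname):

```shell
# Check which address each daemon is bound to
# (50010 = datanode data transfer; 8020/9000 = common namenode RPC ports)
netstat -tlnp | grep -E '50010|8020|9000'

# Baseline raw network throughput between two nodes with scp:
# create a 100 MB test file, then time the copy
dd if=/dev/zero of=/tmp/testfile bs=1M count=100
time scp /tmp/testfile datanode2:/tmp/
```

If daemons show as bound to 127.0.0.1, or the scp rate is near your HDFS transfer rate, the bottleneck is below Hadoop.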
On Wed, Oct 10, 2012 at 12:24 PM, Alexey
Additional info: I also tried to use OpenJDK instead of Sun's JDK - the
issue still persists.
On 10/09/12 03:12 AM, Alexey wrote:
Hi,
I have an issue with Hadoop DFS. I have 3 servers (24 GB RAM on each).
The servers are not overloaded; they just have Hadoop installed. One
has a datanode and