Hello,

I have deployed Hadoop on a cluster of 20 machines, with the replication
factor set to one. When I put a file (larger than the HDFS block size) into
HDFS, all of its blocks are stored on the machine where the hadoop put
command is invoked.

With a higher replication factor, I see the same behavior, except that the
additional replicas are stored randomly across the other machines.
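For reference, here is how I am checking block placement; the file path is just an example from my setup:

```shell
# Put a multi-block file into HDFS (example path).
hadoop fs -put bigfile.dat /user/razen/bigfile.dat

# List each block and the DataNode(s) holding it.
hdfs fsck /user/razen/bigfile.dat -files -blocks -locations
```

In the fsck output, every block shows the same DataNode address (the machine where the put ran).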

Is this normal behavior? If not, what could be the cause?

Thanks,

Razen
