Hey Alexey,

Have you noticed this right from the start? Also, what exactly do you
mean by "Limited replication bandwidth between datanodes - 5Mb." - are
you referring to the dfs.balance.bandwidthPerSec property?

On Wed, Oct 10, 2012 at 10:53 AM, Alexey <alexx...@gmail.com> wrote:
> Additional info: I also tried using OpenJDK instead of Sun's JDK - the
> issue still persists.
>
> On 10/09/12 03:12 AM, Alexey wrote:
>> Hi,
>>
>> I have an issue with Hadoop DFS. I have 3 servers (24GB RAM each).
>> The servers are not overloaded; they just have Hadoop installed. One
>> has a datanode and the namenode, the second a datanode only, and the
>> third a datanode and the secondarynamenode.
>>
>> Hadoop datanodes have a max memory limit of 8GB. The default
>> replication factor is 2. Limited replication bandwidth between
>> datanodes - 5Mb.
>>
>> I've set up Hadoop to communicate between nodes by IP address.
>> Everything works - I can read/write files on each datanode, etc. But
>> the issue is that hadoop dfs commands execute very slowly; even
>> "hadoop dfs -ls /" takes about 3 seconds to run, and it contains only
>> one folder, /user.
>> Files also upload to HDFS very slowly - hundreds of kilobytes per second.
>>
>> I'm using the Debian stable x86-64 distribution, and Hadoop is running
>> on sun-java6-jdk 6.26-0squeeze1.
>>
>> Please give me any suggestions on what I should adjust/check to
>> resolve this issue.
>>
>> As I said before, the overall HDFS configuration is correct, because
>> everything works except for the performance.
>>
>> --
>> Best regards
>> Alexey
>>
>
> --
> Best regards
> Alexey
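
For reference, the limits described above are usually configured along
these lines (a sketch assuming a Hadoop 1.x-style setup; the values are
only examples matching the description):

    # hadoop-env.sh - datanode heap limit (~8 GB in this case)
    export HADOOP_DATANODE_OPTS="-Xmx8g $HADOOP_DATANODE_OPTS"

    <!-- hdfs-site.xml - default replication factor -->
    <property>
      <name>dfs.replication</name>
      <value>2</value>
    </property>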



-- 
Harsh J
