Hello everyone,

I've just installed Hadoop 2.5.1 from source, and I'm having problems
changing the default block size. In my hdfs-site.xml file I've set the property

  <property>
     <name>dfs.blocksize</name>
     <value>67108864</value>
  </property>

to have blocks of 64 MB, but it seems that the system ignores this
setting. When I copy a new file, it uses a block size of 128 MB. Only if I
specify the block size when the file is created (i.e. hdfs dfs
-Ddfs.blocksize=$((64*1024*1024)) -put file .) does it use a block size of
64 MB.
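
In case it helps, this is how I've been checking it (a quick sketch; I'm
assuming hdfs getconf and hdfs dfs -stat behave the same way in 2.5.1,
and /user/tomas/file is just an example path):

  # Ask the client which dfs.blocksize it resolves from its config dir
  hdfs getconf -confKey dfs.blocksize

  # Print the actual block size (in bytes) of a file already in HDFS
  hdfs dfs -stat %o /user/tomas/file

If the first command prints 134217728 rather than 67108864, then
presumably the client is not picking up my hdfs-site.xml at all.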

Any ideas?

Best regards

Tomas
-- 
Tomás Fernández Pena
Centro de Investigacións en Tecnoloxías da Información, CITIUS. Univ.
Santiago de Compostela
Tel: +34 881816439, Fax: +34 881814112,
https://citius.usc.es/equipo/persoal-adscrito/?tf.pena
Pubkey 1024D/81F6435A, Fprint=D140 2ED1 94FE 0112 9D03 6BE7 2AFF EDED
81F6 435A
