Hi,

The block size for HDFS is currently set to 128 MB by default. This is configurable.
My point is that I assume this parameter in hadoop-core.xml sets the block size for both the namenode and the datanodes. However, the storage and random-access pattern for metadata on the namenode is different and suits smaller block sizes. For example, in Linux the OS block size is 4 KB, which means one HDFS block of 128 MB can hold 32K OS blocks. For metadata this may not be useful, and a smaller block size would be more suitable, hence my question.

Thanks,

Mich
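P.S. A quick sanity check of the arithmetic above (a minimal sketch, assuming a 4 KB Linux filesystem block and the 128 MB HDFS default):

```python
# How many 4 KB OS blocks fit in one 128 MB HDFS block?
hdfs_block = 128 * 1024 * 1024   # 128 MB HDFS block (current default)
os_block = 4 * 1024              # 4 KB Linux filesystem block size

print(hdfs_block // os_block)    # 32768, i.e. 32K OS blocks per HDFS block
```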