Hi
I think the metadata size per block is not greatly different. The problem is the
number of blocks: if the block size is less than 64 MB, more blocks are generated
for the same file size (with 32 MB blocks, 2x more blocks).
And yes, all of that metadata is held in the NameNode's heap memory.
Thanks.
Drake 민영근 Ph.D
kt NexR
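The block-count arithmetic in the reply above can be sketched in Python. Note the ~150 bytes per NameNode block object used below is a commonly quoted rule of thumb, not an exact figure from this thread:

```python
import math

def num_blocks(file_size_bytes: int, block_size_bytes: int) -> int:
    """Number of HDFS blocks one file occupies (the last block may be partial)."""
    return max(1, math.ceil(file_size_bytes / block_size_bytes))

MB = 1024 * 1024
# Same 1 GiB file under two block sizes: halving the block size doubles the count.
print(num_blocks(1024 * MB, 64 * MB))  # 16
print(num_blocks(1024 * MB, 32 * MB))  # 32

# Rough NameNode heap cost (assumption: ~150 bytes per block object,
# a commonly quoted rule of thumb, not an exact figure).
BYTES_PER_BLOCK_OBJECT = 150
print(num_blocks(1024 * MB, 32 * MB) * BYTES_PER_BLOCK_OBJECT)  # 4800 bytes
```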
On Tue
thank you for the explanation. How many bytes does each metadata entry
consume in RAM if the BS is 64 MB or smaller than that? I heard all metadata
is stored in RAM, right?
If you set the BS to less than 64 MB, you'll run into NameNode issues when a
client reads a larger file. The client has to ask the NN for the location of
every block; imagine what happens when you want to read a 1 TB file.
The optimal BS is 128 MB. You have to keep in mind that every block will
be re
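To make the 1 TB example above concrete, here is a quick sketch (assuming one block-location lookup per block) of how many blocks the NN must be asked about at different block sizes:

```python
MB = 1024 * 1024
TB = 1024 * 1024 * MB  # 1 TiB

for bs in (2 * MB, 64 * MB, 128 * MB):
    blocks = TB // bs
    print(f"{bs // MB:>4} MB blocks -> {blocks:,} block locations for a 1 TB file")
# 2 MB blocks mean over half a million lookups; 128 MB keeps it under 10,000.
```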
The default HDFS block size of 64 MB means it is the maximum size of a block of
data written to HDFS. So if you write a 4 MB file, it will still occupy only
one block of 4 MB, not more than that. If your file is larger than 64 MB, it
gets split into multiple blocks.
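A small sketch of that splitting rule (sizes in MB, default 64 MB blocks; the helper name is mine, not an HDFS API):

```python
def split_into_blocks(file_size_mb: int, block_size_mb: int = 64) -> list[int]:
    """Sizes (in MB) of the HDFS blocks a file of file_size_mb would occupy."""
    full, rem = divmod(file_size_mb, block_size_mb)
    return [block_size_mb] * full + ([rem] if rem else [])

print(split_into_blocks(4))    # [4]  -- a 4 MB file uses one 4 MB block
print(split_into_blocks(200))  # [64, 64, 64, 8]  -- a 200 MB file splits
```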
If you set the HDFS block s
Hi guys, I have a couple question about HDFS block size:
What will happen if I set my HDFS block size from the default 64 MB down to
2 MB per block?
I decreased the block size because I want to store image files (jpeg, png,
etc.) of about 4 MB each; what is your opin