Hi
As of Hadoop 2.6, the default block size is 128 MB (look for dfs.blocksize):
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
About the 100 MB file with a 64 MB block size: as said, there are 2 blocks,
of 64 MB and 36 MB. There is no 28 MB of wasted space; unlike a disk
filesystem block, an HDFS block that is not full does not occupy the full
block's worth of storage, so the last block is simply smaller.
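A minimal sketch of that layout (plain Python arithmetic, not a Hadoop API; the function name is mine):

```python
def hdfs_blocks(file_size, block_size):
    """Return the sizes of the HDFS blocks a file of file_size bytes
    would occupy. The last block is partial, never padded."""
    blocks = []
    remaining = file_size
    while remaining > 0:
        blocks.append(min(block_size, remaining))
        remaining -= block_size
    return blocks

MB = 1024 * 1024
# 100 MB file, 64 MB block size -> one full 64 MB block plus a 36 MB block
print([b // MB for b in hdfs_blocks(100 * MB, 64 * MB)])  # → [64, 36]
```

So the "missing" 28 MB (64 − 36) is never allocated anywhere; only 100 MB of actual data is stored (per replica).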
Hi,
Answering the first question:
What happens is that the client on the machine issuing the
'copyFromLocal' command first creates a new instance of DistributedFileSystem,
which makes an RPC call to the NameNode to create a new file in the
filesystem's namespace. The NN performs various checks (for example, that
the file does not already exist and that the client has permission to
create it) before the client starts writing data.
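A toy simulation of that create() step (illustrative Python only, not the real Hadoop protocol; the class and method names are mine):

```python
class NameNode:
    """Simplified stand-in for the NameNode side of the create() RPC."""

    def __init__(self):
        self.namespace = set()  # paths that already exist

    def create(self, path):
        # Simplified versions of the checks the real NN performs:
        # reject the call if the file already exists in the namespace.
        if path in self.namespace:
            raise FileExistsError(path)
        self.namespace.add(path)
        # On success the real NN records the new file and the client
        # can start streaming blocks to DataNodes.
        return "created"

nn = NameNode()
print(nn.create("/user/alice/data.txt"))  # first create succeeds
```

A second create() for the same path fails, which is why copying onto an existing HDFS path errors out instead of overwriting.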
Hello everyone,
I have a couple of doubts; can anyone please point me in the right direction?
1> What exactly happens when I copy a 1 TB file to a Hadoop cluster using the
copyFromLocal command?
2> What will be the split size? Will it be the same as the block size?
3> What is a block and what is a split?
If we have