Hi all,

I have a job that creates very big local files, so I need to split it across as many mappers as possible. With the DFS block size I'm currently using, this job is only split into 3 mappers. I don't want to change the HDFS-wide block size because it works fine for my other jobs.

Is there a way to give a specific file a different block size? The documentation says there is, but does not explain how.
I've tried:
hadoop dfs -D dfs.block.size=4M -put file /dest/

But that does not work.
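
If the shell can't do it, I'm wondering whether going through the Java API would. Below is a rough, untested sketch of what I have in mind, assuming the FileSystem.create() overload that takes an explicit per-file blockSize argument does what I hope; the 4 MB value, paths, buffer size and replication factor are just placeholders:

// Rough sketch (untested): copy a local file into HDFS while passing
// an explicit per-file block size to FileSystem.create().
import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class PutWithBlockSize {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        long blockSize = 4L * 1024 * 1024;            // 4 MB block size for this file only
        int bufferSize = 4096;                        // copy buffer size (placeholder)
        short replication = fs.getDefaultReplication();

        InputStream in = new FileInputStream("file"); // local source file (placeholder path)
        FSDataOutputStream out = fs.create(new Path("/dest/file"),
                true, bufferSize, replication, blockSize);

        // Copy and close both streams when done.
        IOUtils.copyBytes(in, out, bufferSize, true);
    }
}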

Any help would be appreciated.

Cheers,
Chrulle
