Hi Ben,
Currently there is no way to specify the block size from the command line in Hadoop.

Why can't you write the file from a Java program?
Is there a use case that requires you to write these files only from the command line?
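
If writing a small Java program is an option, a minimal sketch could look like the one
below (assuming the FileSystem.create(Path, overwrite, bufferSize, replication, blockSize)
overload; the 32 MB value and the /local/path and /remote/path names are just placeholders):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    import java.io.FileInputStream;
    import java.io.InputStream;

    public class PutWithBlockSize {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            Path dst = new Path("/remote/path");               // destination in HDFS (placeholder)
            long blockSize = 32L * 1024 * 1024;                // 32 MB instead of the configured default
            int bufferSize = conf.getInt("io.file.buffer.size", 4096);
            short replication = fs.getDefaultReplication();

            // The blockSize argument applies only to this file; other files
            // keep the cluster-wide dfs.block.size default.
            FSDataOutputStream out = fs.create(dst, true, bufferSize, replication, blockSize);
            InputStream in = new FileInputStream("/local/path"); // local source (placeholder)
            IOUtils.copyBytes(in, out, conf, true);            // copies and closes both streams
        }
    }

This is the same create() API you mention below, just wrapped in a small driver, so the
block size override affects only that one file.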

Regards,
Uma

----- Original Message -----
From: Ben Clay <rbc...@ncsu.edu>
Date: Saturday, August 27, 2011 10:03 pm
Subject: set reduced block size for a specific file
To: hdfs-user@hadoop.apache.org

> I'd like to set a lowered block size for a specific file. I.e., if
> HDFS is configured to use 64 MB blocks, I'd like to use 32 MB blocks
> for a specific file.
> 
> Is there a way to do this from the command line, without writing a
> jar which uses org.apache.hadoop.fs.FileSystem.create()?
> 
> I tried the following, but it didn't work:
> 
> hadoop fs -Ddfs.block.size=1048576  -put /local/path /remote/path
> 
> I also tried -copyFromLocal.  It looks like the -D is being ignored.
> 
> Thanks.
> 
> -Ben
