There is no way to do this from the command line in standard Apache Hadoop.

But other, otherwise Hadoop-compatible systems such as MapR do support this
operation.

Rather than push commercial systems on this mailing list, I would simply
recommend that anybody who is curious email me.

On Sat, Aug 27, 2011 at 12:07 PM, Uma Maheswara Rao G 72686 <
mahesw...@huawei.com> wrote:

> Hi Ben,
> Currently there is no way to specify the block size from the command line
> in Hadoop.
>
> Why can't you write the file from a Java program?
> Is there a use case that requires you to write some files only from the
> command line?
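>
> If you do go the Java route, here is a minimal, untested sketch (the
> local/remote paths, buffer size, and class name are just placeholders):
>
>   import java.io.FileInputStream;
>   import java.io.InputStream;
>
>   import org.apache.hadoop.conf.Configuration;
>   import org.apache.hadoop.fs.FSDataOutputStream;
>   import org.apache.hadoop.fs.FileSystem;
>   import org.apache.hadoop.fs.Path;
>   import org.apache.hadoop.io.IOUtils;
>
>   public class PutWithBlockSize {
>     public static void main(String[] args) throws Exception {
>       Configuration conf = new Configuration();
>       FileSystem fs = FileSystem.get(conf);
>
>       // Per-file block size: 32 MB instead of the configured default.
>       long blockSize = 32L * 1024 * 1024;
>       int bufferSize = 4096;
>       short replication = fs.getDefaultReplication();
>
>       // create() takes the block size per file, which the shell
>       // does not expose.
>       InputStream in = new FileInputStream("/local/path");
>       FSDataOutputStream out = fs.create(new Path("/remote/path"),
>           true, bufferSize, replication, blockSize);
>       IOUtils.copyBytes(in, out, bufferSize, true); // closes both streams
>     }
>   }
>
> Afterwards "hadoop fsck /remote/path -files -blocks" should show the
> smaller blocks.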
>
> Regards,
> Uma
>
> ----- Original Message -----
> From: Ben Clay <rbc...@ncsu.edu>
> Date: Saturday, August 27, 2011 10:03 pm
> Subject: set reduced block size for a specific file
> To: hdfs-user@hadoop.apache.org
>
> > I'd like to set a lowered block size for a specific file.  I.e., if
> > HDFS is configured to use 64 MB blocks, I'd like to use 32 MB blocks
> > for a specific file.
> >
> > Is there a way to do this from the command line, without writing a jar
> > which uses org.apache.hadoop.fs.FileSystem.create()?
> >
> > I tried the following, but it didn't work:
> >
> > hadoop fs -Ddfs.block.size=1048576  -put /local/path /remote/path
> >
> > I also tried -copyFromLocal.  It looks like the -D is being ignored.
> >
> > Thanks.
> >
> > -Ben
> >
>
